Jul 9 14:53:24.037654 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:38:39 -00 2025
Jul 9 14:53:24.037698 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:53:24.037709 kernel: BIOS-provided physical RAM map:
Jul 9 14:53:24.037720 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 9 14:53:24.037727 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 9 14:53:24.037735 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 9 14:53:24.037745 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 9 14:53:24.037753 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 9 14:53:24.037762 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 14:53:24.037770 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 9 14:53:24.037778 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 9 14:53:24.037786 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 14:53:24.037796 kernel: NX (Execute Disable) protection: active
Jul 9 14:53:24.037804 kernel: APIC: Static calls initialized
Jul 9 14:53:24.037814 kernel: SMBIOS 3.0.0 present.
Jul 9 14:53:24.037823 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 9 14:53:24.037831 kernel: DMI: Memory slots populated: 1/1
Jul 9 14:53:24.037841 kernel: Hypervisor detected: KVM
Jul 9 14:53:24.037850 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 9 14:53:24.037859 kernel: kvm-clock: using sched offset of 5068678209 cycles
Jul 9 14:53:24.037868 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 14:53:24.037877 kernel: tsc: Detected 1996.249 MHz processor
Jul 9 14:53:24.037886 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 14:53:24.037895 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 14:53:24.037905 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 9 14:53:24.037914 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 9 14:53:24.037924 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 14:53:24.037935 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 9 14:53:24.037944 kernel: ACPI: Early table checksum verification disabled
Jul 9 14:53:24.037952 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 9 14:53:24.037960 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:53:24.037968 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:53:24.037977 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:53:24.037985 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 9 14:53:24.037993 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:53:24.038002 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:53:24.038011 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 9 14:53:24.038019 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 9 14:53:24.038027 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 9 14:53:24.038035 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 9 14:53:24.038046 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 9 14:53:24.038055 kernel: No NUMA configuration found
Jul 9 14:53:24.038065 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 9 14:53:24.038073 kernel: NODE_DATA(0) allocated [mem 0x13fff8dc0-0x13fffffff]
Jul 9 14:53:24.038082 kernel: Zone ranges:
Jul 9 14:53:24.038090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 14:53:24.038099 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 9 14:53:24.038107 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 9 14:53:24.038116 kernel: Device empty
Jul 9 14:53:24.038124 kernel: Movable zone start for each node
Jul 9 14:53:24.038134 kernel: Early memory node ranges
Jul 9 14:53:24.038143 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 9 14:53:24.038151 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 9 14:53:24.038160 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 9 14:53:24.038168 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 9 14:53:24.038177 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 14:53:24.038185 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 9 14:53:24.038194 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 9 14:53:24.038202 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 9 14:53:24.038213 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 9 14:53:24.038221 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 9 14:53:24.038230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 9 14:53:24.038238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 9 14:53:24.038247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 9 14:53:24.038255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 9 14:53:24.038264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 9 14:53:24.038272 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 14:53:24.038281 kernel: CPU topo: Max. logical packages: 2
Jul 9 14:53:24.038292 kernel: CPU topo: Max. logical dies: 2
Jul 9 14:53:24.038300 kernel: CPU topo: Max. dies per package: 1
Jul 9 14:53:24.038325 kernel: CPU topo: Max. threads per core: 1
Jul 9 14:53:24.038334 kernel: CPU topo: Num. cores per package: 1
Jul 9 14:53:24.038342 kernel: CPU topo: Num. threads per package: 1
Jul 9 14:53:24.038351 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 9 14:53:24.038359 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 9 14:53:24.038367 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 9 14:53:24.038376 kernel: Booting paravirtualized kernel on KVM
Jul 9 14:53:24.038387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 14:53:24.038396 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 9 14:53:24.038404 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 9 14:53:24.038413 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 9 14:53:24.038421 kernel: pcpu-alloc: [0] 0 1
Jul 9 14:53:24.038430 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 9 14:53:24.038439 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:53:24.038449 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 14:53:24.038459 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 14:53:24.038467 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 14:53:24.038476 kernel: Fallback order for Node 0: 0
Jul 9 14:53:24.038484 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 9 14:53:24.038493 kernel: Policy zone: Normal
Jul 9 14:53:24.038501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 14:53:24.038510 kernel: software IO TLB: area num 2.
Jul 9 14:53:24.038518 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 9 14:53:24.038527 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 9 14:53:24.038537 kernel: ftrace: allocated 157 pages with 5 groups
Jul 9 14:53:24.038545 kernel: Dynamic Preempt: voluntary
Jul 9 14:53:24.038554 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 14:53:24.038563 kernel: rcu: RCU event tracing is enabled.
Jul 9 14:53:24.038572 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 9 14:53:24.038580 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 14:53:24.038589 kernel: Rude variant of Tasks RCU enabled.
Jul 9 14:53:24.038597 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 14:53:24.038606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 14:53:24.038616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 9 14:53:24.038625 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:53:24.038634 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:53:24.038642 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:53:24.038651 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 9 14:53:24.038660 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 14:53:24.038668 kernel: Console: colour VGA+ 80x25
Jul 9 14:53:24.038676 kernel: printk: legacy console [tty0] enabled
Jul 9 14:53:24.038685 kernel: printk: legacy console [ttyS0] enabled
Jul 9 14:53:24.038695 kernel: ACPI: Core revision 20240827
Jul 9 14:53:24.038703 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 14:53:24.038711 kernel: x2apic enabled
Jul 9 14:53:24.038720 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 14:53:24.038728 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 14:53:24.038737 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 9 14:53:24.038751 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 9 14:53:24.038761 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 9 14:53:24.038770 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 9 14:53:24.038779 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 14:53:24.038788 kernel: Spectre V2 : Mitigation: Retpolines
Jul 9 14:53:24.038797 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 9 14:53:24.038808 kernel: Speculative Store Bypass: Vulnerable
Jul 9 14:53:24.038817 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 9 14:53:24.038826 kernel: Freeing SMP alternatives memory: 32K
Jul 9 14:53:24.038835 kernel: pid_max: default: 32768 minimum: 301
Jul 9 14:53:24.038843 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 14:53:24.038854 kernel: landlock: Up and running.
Jul 9 14:53:24.038863 kernel: SELinux: Initializing.
Jul 9 14:53:24.038872 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 14:53:24.038881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 14:53:24.038890 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 9 14:53:24.038899 kernel: Performance Events: AMD PMU driver.
Jul 9 14:53:24.038907 kernel: ... version: 0
Jul 9 14:53:24.038916 kernel: ... bit width: 48
Jul 9 14:53:24.038925 kernel: ... generic registers: 4
Jul 9 14:53:24.038935 kernel: ... value mask: 0000ffffffffffff
Jul 9 14:53:24.038944 kernel: ... max period: 00007fffffffffff
Jul 9 14:53:24.038953 kernel: ... fixed-purpose events: 0
Jul 9 14:53:24.038962 kernel: ... event mask: 000000000000000f
Jul 9 14:53:24.038971 kernel: signal: max sigframe size: 1440
Jul 9 14:53:24.038980 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 14:53:24.038989 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 14:53:24.038998 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 14:53:24.039007 kernel: smp: Bringing up secondary CPUs ...
Jul 9 14:53:24.039017 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 14:53:24.039026 kernel: .... node #0, CPUs: #1
Jul 9 14:53:24.039035 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 14:53:24.039044 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 9 14:53:24.039053 kernel: Memory: 3962040K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54568K init, 2400K bss, 227284K reserved, 0K cma-reserved)
Jul 9 14:53:24.039062 kernel: devtmpfs: initialized
Jul 9 14:53:24.039071 kernel: x86/mm: Memory block size: 128MB
Jul 9 14:53:24.039080 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 14:53:24.039089 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 9 14:53:24.039100 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 14:53:24.039109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 14:53:24.039117 kernel: audit: initializing netlink subsys (disabled)
Jul 9 14:53:24.039126 kernel: audit: type=2000 audit(1752072800.714:1): state=initialized audit_enabled=0 res=1
Jul 9 14:53:24.039135 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 14:53:24.039144 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 14:53:24.039153 kernel: cpuidle: using governor menu
Jul 9 14:53:24.039162 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 14:53:24.039171 kernel: dca service started, version 1.12.1
Jul 9 14:53:24.039181 kernel: PCI: Using configuration type 1 for base access
Jul 9 14:53:24.039190 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 14:53:24.039199 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 14:53:24.039208 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 14:53:24.039217 kernel: ACPI: Added _OSI(Module Device)
Jul 9 14:53:24.039226 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 14:53:24.039234 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 14:53:24.039243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 14:53:24.039252 kernel: ACPI: Interpreter enabled
Jul 9 14:53:24.039262 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 9 14:53:24.039271 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 14:53:24.039280 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 14:53:24.039289 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 14:53:24.039298 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 9 14:53:24.039326 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 14:53:24.039604 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 14:53:24.039698 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 9 14:53:24.039788 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 9 14:53:24.039802 kernel: acpiphp: Slot [3] registered
Jul 9 14:53:24.039811 kernel: acpiphp: Slot [4] registered
Jul 9 14:53:24.039820 kernel: acpiphp: Slot [5] registered
Jul 9 14:53:24.039829 kernel: acpiphp: Slot [6] registered
Jul 9 14:53:24.039838 kernel: acpiphp: Slot [7] registered
Jul 9 14:53:24.039846 kernel: acpiphp: Slot [8] registered
Jul 9 14:53:24.039855 kernel: acpiphp: Slot [9] registered
Jul 9 14:53:24.039864 kernel: acpiphp: Slot [10] registered
Jul 9 14:53:24.039875 kernel: acpiphp: Slot [11] registered
Jul 9 14:53:24.039884 kernel: acpiphp: Slot [12] registered
Jul 9 14:53:24.039893 kernel: acpiphp: Slot [13] registered
Jul 9 14:53:24.039902 kernel: acpiphp: Slot [14] registered
Jul 9 14:53:24.039910 kernel: acpiphp: Slot [15] registered
Jul 9 14:53:24.039919 kernel: acpiphp: Slot [16] registered
Jul 9 14:53:24.039928 kernel: acpiphp: Slot [17] registered
Jul 9 14:53:24.039936 kernel: acpiphp: Slot [18] registered
Jul 9 14:53:24.039945 kernel: acpiphp: Slot [19] registered
Jul 9 14:53:24.039955 kernel: acpiphp: Slot [20] registered
Jul 9 14:53:24.039964 kernel: acpiphp: Slot [21] registered
Jul 9 14:53:24.039973 kernel: acpiphp: Slot [22] registered
Jul 9 14:53:24.039982 kernel: acpiphp: Slot [23] registered
Jul 9 14:53:24.039990 kernel: acpiphp: Slot [24] registered
Jul 9 14:53:24.039999 kernel: acpiphp: Slot [25] registered
Jul 9 14:53:24.040008 kernel: acpiphp: Slot [26] registered
Jul 9 14:53:24.040016 kernel: acpiphp: Slot [27] registered
Jul 9 14:53:24.040025 kernel: acpiphp: Slot [28] registered
Jul 9 14:53:24.040034 kernel: acpiphp: Slot [29] registered
Jul 9 14:53:24.040044 kernel: acpiphp: Slot [30] registered
Jul 9 14:53:24.040053 kernel: acpiphp: Slot [31] registered
Jul 9 14:53:24.040062 kernel: PCI host bridge to bus 0000:00
Jul 9 14:53:24.040181 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 14:53:24.040262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 14:53:24.040369 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 14:53:24.040447 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 9 14:53:24.040543 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 9 14:53:24.040622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 14:53:24.040731 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 9 14:53:24.040842 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 9 14:53:24.040947 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 9 14:53:24.041043 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 9 14:53:24.041141 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 9 14:53:24.041234 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 9 14:53:24.041363 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 9 14:53:24.041461 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 9 14:53:24.041587 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 9 14:53:24.041720 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 9 14:53:24.041815 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 9 14:53:24.041924 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 9 14:53:24.042015 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 9 14:53:24.042101 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 9 14:53:24.042189 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 9 14:53:24.042280 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 9 14:53:24.045437 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 14:53:24.045563 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 9 14:53:24.045668 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 9 14:53:24.045765 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 9 14:53:24.045861 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 9 14:53:24.045960 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 9 14:53:24.046056 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 9 14:53:24.046145 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 9 14:53:24.046237 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 9 14:53:24.046353 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 9 14:53:24.046452 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 14:53:24.046541 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 9 14:53:24.046629 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 9 14:53:24.046723 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 14:53:24.046812 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 9 14:53:24.046906 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 9 14:53:24.046992 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 9 14:53:24.047006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 9 14:53:24.047016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 9 14:53:24.047025 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 9 14:53:24.047034 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 9 14:53:24.047043 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 9 14:53:24.047052 kernel: iommu: Default domain type: Translated
Jul 9 14:53:24.047062 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 9 14:53:24.047074 kernel: PCI: Using ACPI for IRQ routing
Jul 9 14:53:24.047083 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 9 14:53:24.047092 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 9 14:53:24.047101 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 9 14:53:24.047204 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 9 14:53:24.049376 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 9 14:53:24.049530 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 9 14:53:24.049552 kernel: vgaarb: loaded
Jul 9 14:53:24.049566 kernel: clocksource: Switched to clocksource kvm-clock
Jul 9 14:53:24.049582 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 14:53:24.049592 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 14:53:24.049608 kernel: pnp: PnP ACPI init
Jul 9 14:53:24.049838 kernel: pnp 00:03: [dma 2]
Jul 9 14:53:24.049858 kernel: pnp: PnP ACPI: found 5 devices
Jul 9 14:53:24.049872 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 9 14:53:24.049885 kernel: NET: Registered PF_INET protocol family
Jul 9 14:53:24.049898 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 14:53:24.049916 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 14:53:24.049929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 14:53:24.049942 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 14:53:24.049956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 14:53:24.049968 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 14:53:24.049977 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 14:53:24.049992 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 14:53:24.050001 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 14:53:24.050011 kernel: NET: Registered PF_XDP protocol family
Jul 9 14:53:24.050116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 9 14:53:24.050219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 9 14:53:24.050336 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 9 14:53:24.050436 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 9 14:53:24.050534 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 9 14:53:24.050654 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 9 14:53:24.050770 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 9 14:53:24.050791 kernel: PCI: CLS 0 bytes, default 64
Jul 9 14:53:24.050806 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 9 14:53:24.050816 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 9 14:53:24.050828 kernel: Initialise system trusted keyrings
Jul 9 14:53:24.050840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 14:53:24.050849 kernel: Key type asymmetric registered
Jul 9 14:53:24.050858 kernel: Asymmetric key parser 'x509' registered
Jul 9 14:53:24.050868 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 14:53:24.050877 kernel: io scheduler mq-deadline registered
Jul 9 14:53:24.050888 kernel: io scheduler kyber registered
Jul 9 14:53:24.050897 kernel: io scheduler bfq registered
Jul 9 14:53:24.050906 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 14:53:24.050916 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 9 14:53:24.050925 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 9 14:53:24.050934 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 9 14:53:24.050943 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 9 14:53:24.050952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 14:53:24.050962 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 14:53:24.050978 kernel: random: crng init done
Jul 9 14:53:24.050987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 9 14:53:24.050997 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 14:53:24.051006 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 14:53:24.051105 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 9 14:53:24.051120 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 9 14:53:24.051198 kernel: rtc_cmos 00:04: registered as rtc0
Jul 9 14:53:24.051277 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T14:53:23 UTC (1752072803)
Jul 9 14:53:24.053640 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 9 14:53:24.053686 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 9 14:53:24.053697 kernel: NET: Registered PF_INET6 protocol family
Jul 9 14:53:24.053707 kernel: Segment Routing with IPv6
Jul 9 14:53:24.053717 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 14:53:24.053728 kernel: NET: Registered PF_PACKET protocol family
Jul 9 14:53:24.053738 kernel: Key type dns_resolver registered
Jul 9 14:53:24.053748 kernel: IPI shorthand broadcast: enabled
Jul 9 14:53:24.053758 kernel: sched_clock: Marking stable (3928008905, 178498657)->(4117156126, -10648564)
Jul 9 14:53:24.053773 kernel: registered taskstats version 1
Jul 9 14:53:24.053783 kernel: Loading compiled-in X.509 certificates
Jul 9 14:53:24.053793 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 8ba3d283fde4a005aa35ab9394afe8122b8a3878'
Jul 9 14:53:24.053802 kernel: Demotion targets for Node 0: null
Jul 9 14:53:24.053812 kernel: Key type .fscrypt registered
Jul 9 14:53:24.053822 kernel: Key type fscrypt-provisioning registered
Jul 9 14:53:24.053832 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 14:53:24.053842 kernel: ima: Allocated hash algorithm: sha1
Jul 9 14:53:24.053851 kernel: ima: No architecture policies found
Jul 9 14:53:24.053864 kernel: clk: Disabling unused clocks
Jul 9 14:53:24.053874 kernel: Warning: unable to open an initial console.
Jul 9 14:53:24.053884 kernel: Freeing unused kernel image (initmem) memory: 54568K
Jul 9 14:53:24.053893 kernel: Write protecting the kernel read-only data: 24576k
Jul 9 14:53:24.053903 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 9 14:53:24.053914 kernel: Run /init as init process
Jul 9 14:53:24.053923 kernel: with arguments:
Jul 9 14:53:24.053933 kernel: /init
Jul 9 14:53:24.053943 kernel: with environment:
Jul 9 14:53:24.053956 kernel: HOME=/
Jul 9 14:53:24.053965 kernel: TERM=linux
Jul 9 14:53:24.053973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 14:53:24.053989 systemd[1]: Successfully made /usr/ read-only.
Jul 9 14:53:24.054002 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 14:53:24.054013 systemd[1]: Detected virtualization kvm.
Jul 9 14:53:24.054023 systemd[1]: Detected architecture x86-64.
Jul 9 14:53:24.054041 systemd[1]: Running in initrd.
Jul 9 14:53:24.054053 systemd[1]: No hostname configured, using default hostname.
Jul 9 14:53:24.054063 systemd[1]: Hostname set to .
Jul 9 14:53:24.054073 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 14:53:24.054083 systemd[1]: Queued start job for default target initrd.target.
Jul 9 14:53:24.054093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 14:53:24.054105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 14:53:24.054116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 14:53:24.054126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 14:53:24.054137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 14:53:24.054148 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 14:53:24.054159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 14:53:24.054169 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 14:53:24.054182 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 14:53:24.054192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 14:53:24.054202 systemd[1]: Reached target paths.target - Path Units.
Jul 9 14:53:24.054212 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 14:53:24.054222 systemd[1]: Reached target swap.target - Swaps.
Jul 9 14:53:24.054232 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 14:53:24.054243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 14:53:24.054253 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 14:53:24.054266 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 14:53:24.054276 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 14:53:24.054288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 14:53:24.054298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 14:53:24.054325 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 14:53:24.054335 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 14:53:24.054346 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 14:53:24.054356 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 14:53:24.054366 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 14:53:24.054379 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 14:53:24.054389 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 14:53:24.054401 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 14:53:24.054412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 14:53:24.054422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:53:24.054458 systemd-journald[214]: Collecting audit messages is disabled.
Jul 9 14:53:24.054483 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 14:53:24.054498 systemd-journald[214]: Journal started
Jul 9 14:53:24.054526 systemd-journald[214]: Runtime Journal (/run/log/journal/8240d74addd74686a1713d35a9ca20ed) is 8M, max 78.5M, 70.5M free.
Jul 9 14:53:24.068333 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 14:53:24.073169 systemd-modules-load[216]: Inserted module 'overlay'
Jul 9 14:53:24.113518 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 14:53:24.113547 kernel: Bridge firewalling registered
Jul 9 14:53:24.105360 systemd-modules-load[216]: Inserted module 'br_netfilter'
Jul 9 14:53:24.114996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 14:53:24.116039 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 14:53:24.117245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 14:53:24.118348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:53:24.123550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 14:53:24.125423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 14:53:24.130528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 14:53:24.133463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 14:53:24.153712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 14:53:24.154867 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 14:53:24.160867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 14:53:24.163916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 14:53:24.173639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 14:53:24.176397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 14:53:24.178409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 14:53:24.181738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 14:53:24.205868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 14:53:24.223476 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:53:24.241236 systemd-resolved[250]: Positive Trust Anchors:
Jul 9 14:53:24.242071 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 14:53:24.242118 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 14:53:24.251669 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jul 9 14:53:24.256424 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 14:53:24.258351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 14:53:24.321468 kernel: SCSI subsystem initialized
Jul 9 14:53:24.334380 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 14:53:24.347397 kernel: iscsi: registered transport (tcp)
Jul 9 14:53:24.409550 kernel: iscsi: registered transport (qla4xxx)
Jul 9 14:53:24.409690 kernel: QLogic iSCSI HBA Driver
Jul 9 14:53:24.440943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 14:53:24.472998 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 14:53:24.478103 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 14:53:24.582517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 14:53:24.589728 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 14:53:24.691459 kernel: raid6: sse2x4 gen() 5138 MB/s
Jul 9 14:53:24.709428 kernel: raid6: sse2x2 gen() 12546 MB/s
Jul 9 14:53:24.727964 kernel: raid6: sse2x1 gen() 9317 MB/s
Jul 9 14:53:24.728076 kernel: raid6: using algorithm sse2x2 gen() 12546 MB/s
Jul 9 14:53:24.746998 kernel: raid6: .... xor() 8626 MB/s, rmw enabled
Jul 9 14:53:24.747112 kernel: raid6: using ssse3x2 recovery algorithm
Jul 9 14:53:24.771428 kernel: xor: measuring software checksum speed
Jul 9 14:53:24.774365 kernel: prefetch64-sse : 3383 MB/sec
Jul 9 14:53:24.778065 kernel: generic_sse : 1778 MB/sec
Jul 9 14:53:24.778160 kernel: xor: using function: prefetch64-sse (3383 MB/sec)
Jul 9 14:53:24.987400 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 14:53:24.997035 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 14:53:24.999840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 14:53:25.059402 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jul 9 14:53:25.073982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 14:53:25.081911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 14:53:25.113253 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jul 9 14:53:25.154720 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 14:53:25.159235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 14:53:25.223863 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 14:53:25.238764 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 14:53:25.323337 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 9 14:53:25.328347 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 9 14:53:25.344785 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 9 14:53:25.359141 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 14:53:25.359194 kernel: GPT:17805311 != 20971519
Jul 9 14:53:25.359207 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 14:53:25.359223 kernel: GPT:17805311 != 20971519
Jul 9 14:53:25.359243 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 14:53:25.359255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:53:25.360064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 14:53:25.360222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:53:25.362437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:53:25.366622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:53:25.367557 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 14:53:25.382371 kernel: libata version 3.00 loaded.
Jul 9 14:53:25.392724 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 9 14:53:25.405366 kernel: scsi host0: ata_piix
Jul 9 14:53:25.419332 kernel: scsi host1: ata_piix
Jul 9 14:53:25.419588 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jul 9 14:53:25.421008 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jul 9 14:53:25.440586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 14:53:25.472089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:53:25.484211 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 14:53:25.502178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 14:53:25.510640 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 14:53:25.511241 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 14:53:25.515431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 14:53:25.548365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:53:25.548961 disk-uuid[561]: Primary Header is updated.
Jul 9 14:53:25.548961 disk-uuid[561]: Secondary Entries is updated.
Jul 9 14:53:25.548961 disk-uuid[561]: Secondary Header is updated.
Jul 9 14:53:25.695703 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 14:53:25.712037 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 14:53:25.712713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 14:53:25.714063 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 14:53:25.716159 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 14:53:25.749646 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 14:53:26.575384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:53:26.577574 disk-uuid[562]: The operation has completed successfully.
Jul 9 14:53:26.643776 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 14:53:26.644009 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 14:53:26.705419 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 14:53:26.740237 sh[586]: Success
Jul 9 14:53:26.789178 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 14:53:26.789293 kernel: device-mapper: uevent: version 1.0.3
Jul 9 14:53:26.796535 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 14:53:26.812505 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jul 9 14:53:26.894709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 14:53:26.901416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 14:53:26.908520 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 14:53:26.923435 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 14:53:26.927395 kernel: BTRFS: device fsid 082bcfbc-2c86-46fe-87f4-85dea5450235 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (598)
Jul 9 14:53:26.933414 kernel: BTRFS info (device dm-0): first mount of filesystem 082bcfbc-2c86-46fe-87f4-85dea5450235
Jul 9 14:53:26.933483 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:53:26.933515 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 14:53:26.948094 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 14:53:26.950627 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 14:53:26.952770 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 14:53:26.954616 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 14:53:26.957476 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 14:53:27.002357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (629)
Jul 9 14:53:27.011130 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:53:27.011204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:53:27.015620 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:53:27.033341 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:53:27.033799 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 14:53:27.037612 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 14:53:27.110524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 14:53:27.116908 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 14:53:27.176007 systemd-networkd[769]: lo: Link UP
Jul 9 14:53:27.176741 systemd-networkd[769]: lo: Gained carrier
Jul 9 14:53:27.179239 systemd-networkd[769]: Enumeration completed
Jul 9 14:53:27.179868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 14:53:27.180494 systemd[1]: Reached target network.target - Network.
Jul 9 14:53:27.182291 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:53:27.182296 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 14:53:27.186442 systemd-networkd[769]: eth0: Link UP
Jul 9 14:53:27.186446 systemd-networkd[769]: eth0: Gained carrier
Jul 9 14:53:27.186463 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:53:27.198439 systemd-networkd[769]: eth0: DHCPv4 address 172.24.4.253/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 9 14:53:27.257343 ignition[699]: Ignition 2.21.0
Jul 9 14:53:27.257355 ignition[699]: Stage: fetch-offline
Jul 9 14:53:27.257390 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:27.259046 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 14:53:27.257400 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:27.257481 ignition[699]: parsed url from cmdline: ""
Jul 9 14:53:27.261563 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 9 14:53:27.257485 ignition[699]: no config URL provided
Jul 9 14:53:27.257490 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 14:53:27.257498 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Jul 9 14:53:27.257502 ignition[699]: failed to fetch config: resource requires networking
Jul 9 14:53:27.257657 ignition[699]: Ignition finished successfully
Jul 9 14:53:27.285988 ignition[779]: Ignition 2.21.0
Jul 9 14:53:27.286003 ignition[779]: Stage: fetch
Jul 9 14:53:27.286651 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:27.286663 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:27.286974 ignition[779]: parsed url from cmdline: ""
Jul 9 14:53:27.286981 ignition[779]: no config URL provided
Jul 9 14:53:27.286988 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 14:53:27.287005 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jul 9 14:53:27.287258 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 9 14:53:27.289196 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 9 14:53:27.289228 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 9 14:53:27.583711 ignition[779]: GET result: OK
Jul 9 14:53:27.583960 ignition[779]: parsing config with SHA512: 2e6744472003809634f4b91e3ac39d97164ee3da1221bdffd2e57fd673fc94163b529b76d06c004fa087d0e8d8bbf8ee8d70ff1e2384bb968e2a39b80f1c0a95
Jul 9 14:53:27.597163 unknown[779]: fetched base config from "system"
Jul 9 14:53:27.597199 unknown[779]: fetched base config from "system"
Jul 9 14:53:27.598127 ignition[779]: fetch: fetch complete
Jul 9 14:53:27.597214 unknown[779]: fetched user config from "openstack"
Jul 9 14:53:27.598140 ignition[779]: fetch: fetch passed
Jul 9 14:53:27.604132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 9 14:53:27.598244 ignition[779]: Ignition finished successfully
Jul 9 14:53:27.608791 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 14:53:27.673404 ignition[786]: Ignition 2.21.0
Jul 9 14:53:27.673436 ignition[786]: Stage: kargs
Jul 9 14:53:27.673793 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:27.673817 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:27.680337 ignition[786]: kargs: kargs passed
Jul 9 14:53:27.681743 ignition[786]: Ignition finished successfully
Jul 9 14:53:27.684243 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 14:53:27.688710 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 14:53:27.734565 ignition[792]: Ignition 2.21.0
Jul 9 14:53:27.736021 ignition[792]: Stage: disks
Jul 9 14:53:27.736410 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:27.736436 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:27.740990 ignition[792]: disks: disks passed
Jul 9 14:53:27.741093 ignition[792]: Ignition finished successfully
Jul 9 14:53:27.744129 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 14:53:27.745831 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 14:53:27.748042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 14:53:27.751284 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 14:53:27.754269 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 14:53:27.756907 systemd[1]: Reached target basic.target - Basic System.
Jul 9 14:53:27.762129 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 14:53:27.823477 systemd-fsck[801]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jul 9 14:53:27.840672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 14:53:27.845176 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 14:53:28.097411 kernel: EXT4-fs (vda9): mounted filesystem b08a603c-44fa-43af-af80-90bed9b8770a r/w with ordered data mode. Quota mode: none.
Jul 9 14:53:28.101275 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 14:53:28.105014 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 14:53:28.110679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 14:53:28.115527 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 14:53:28.119148 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 14:53:28.123364 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 9 14:53:28.138670 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 14:53:28.139545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 14:53:28.157850 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 14:53:28.174844 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (809)
Jul 9 14:53:28.177659 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 14:53:28.187881 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:53:28.187921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:53:28.190955 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:53:28.204244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 14:53:28.270345 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:28.280075 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 14:53:28.290392 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jul 9 14:53:28.297158 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 14:53:28.302345 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 14:53:28.422513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 14:53:28.425884 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 14:53:28.428557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 14:53:28.443193 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 14:53:28.446964 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:53:28.468076 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 14:53:28.477237 ignition[925]: INFO : Ignition 2.21.0
Jul 9 14:53:28.477237 ignition[925]: INFO : Stage: mount
Jul 9 14:53:28.480160 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:28.480160 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:28.480160 ignition[925]: INFO : mount: mount passed
Jul 9 14:53:28.480160 ignition[925]: INFO : Ignition finished successfully
Jul 9 14:53:28.483344 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 14:53:29.034739 systemd-networkd[769]: eth0: Gained IPv6LL
Jul 9 14:53:29.319401 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:31.333380 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:35.346451 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:35.364250 coreos-metadata[811]: Jul 09 14:53:35.364 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:53:35.413257 coreos-metadata[811]: Jul 09 14:53:35.413 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 9 14:53:35.430906 coreos-metadata[811]: Jul 09 14:53:35.430 INFO Fetch successful
Jul 9 14:53:35.432514 coreos-metadata[811]: Jul 09 14:53:35.432 INFO wrote hostname ci-9999-9-100-3d8d1010bc.novalocal to /sysroot/etc/hostname
Jul 9 14:53:35.437962 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 9 14:53:35.438283 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 9 14:53:35.446489 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 14:53:35.494252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 14:53:35.536465 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (943)
Jul 9 14:53:35.545375 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:53:35.545481 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:53:35.549807 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:53:35.565012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 14:53:35.649247 ignition[961]: INFO : Ignition 2.21.0
Jul 9 14:53:35.649247 ignition[961]: INFO : Stage: files
Jul 9 14:53:35.653513 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 14:53:35.653513 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:53:35.653513 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 14:53:35.658869 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 14:53:35.658869 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 14:53:35.662892 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 14:53:35.662892 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 14:53:35.662892 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 14:53:35.660767 unknown[961]: wrote ssh authorized keys file for user: core
Jul 9 14:53:35.670585 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 14:53:35.670585 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 9 14:53:35.791964 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 14:53:36.930738 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 14:53:36.930738 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 14:53:36.937043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:53:36.958952 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:53:36.958952 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:53:36.958952 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 9 14:53:37.736685 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 9 14:53:39.451246 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:53:39.451246 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 9 14:53:39.456525 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 14:53:39.463386 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 14:53:39.463386 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 9 14:53:39.463386 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 14:53:39.472672 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 14:53:39.472672 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 14:53:39.472672 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 14:53:39.472672 ignition[961]: INFO : files: files passed
Jul 9 14:53:39.472672 ignition[961]: INFO : Ignition finished successfully
Jul 9 14:53:39.466968 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 14:53:39.476619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 14:53:39.486884 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 14:53:39.489755 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 14:53:39.494213 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 14:53:39.522392 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:53:39.522392 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:53:39.524574 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:53:39.527093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 14:53:39.527916 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 14:53:39.530929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 14:53:39.595874 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 14:53:39.596131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 14:53:39.599446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 14:53:39.602115 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 14:53:39.605141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 14:53:39.607590 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 14:53:39.652678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 14:53:39.658562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 14:53:39.700158 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 14:53:39.703658 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 14:53:39.705470 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 14:53:39.708570 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 14:53:39.708879 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 14:53:39.712066 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 14:53:39.714100 systemd[1]: Stopped target basic.target - Basic System. Jul 9 14:53:39.717113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 14:53:39.719831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 14:53:39.722507 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 14:53:39.725533 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 14:53:39.728688 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 14:53:39.731731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 14:53:39.734765 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 14:53:39.737884 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 14:53:39.740998 systemd[1]: Stopped target swap.target - Swaps. Jul 9 14:53:39.743743 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 14:53:39.744177 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 14:53:39.747233 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 14:53:39.749143 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 9 14:53:39.751776 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 14:53:39.752541 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 14:53:39.754900 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 14:53:39.755517 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 14:53:39.759133 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 14:53:39.759646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 14:53:39.762810 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 14:53:39.763220 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 14:53:39.769509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 14:53:39.779820 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 14:53:39.781257 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 14:53:39.781728 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 14:53:39.787611 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 14:53:39.788399 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 14:53:39.794808 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 14:53:39.795452 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 9 14:53:39.819824 ignition[1015]: INFO : Ignition 2.21.0 Jul 9 14:53:39.821687 ignition[1015]: INFO : Stage: umount Jul 9 14:53:39.821687 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 14:53:39.821687 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 14:53:39.826423 ignition[1015]: INFO : umount: umount passed Jul 9 14:53:39.826423 ignition[1015]: INFO : Ignition finished successfully Jul 9 14:53:39.823708 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 14:53:39.825952 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 14:53:39.826068 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 14:53:39.826881 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 14:53:39.826977 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 14:53:39.828286 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 14:53:39.828388 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 14:53:39.829589 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 14:53:39.829634 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 14:53:39.830527 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 14:53:39.830579 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 14:53:39.831487 systemd[1]: Stopped target network.target - Network. Jul 9 14:53:39.832392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 14:53:39.832456 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 14:53:39.833425 systemd[1]: Stopped target paths.target - Path Units. Jul 9 14:53:39.834342 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 14:53:39.840370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 9 14:53:39.840918 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 14:53:39.842098 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 14:53:39.843076 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 14:53:39.843113 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 14:53:39.844057 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 14:53:39.844093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 14:53:39.845084 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 14:53:39.845144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 14:53:39.846114 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 14:53:39.846161 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 14:53:39.847118 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 14:53:39.847166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 14:53:39.848268 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 14:53:39.849494 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 14:53:39.859339 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 14:53:39.859505 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 14:53:39.864067 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 14:53:39.864300 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 14:53:39.864508 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 14:53:39.866925 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 14:53:39.867569 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 14:53:39.868396 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 9 14:53:39.868455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 14:53:39.870277 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 14:53:39.872079 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 14:53:39.872155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 14:53:39.872786 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 14:53:39.872842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 14:53:39.876577 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 14:53:39.876645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 14:53:39.877881 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 14:53:39.877932 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 14:53:39.880629 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 14:53:39.882135 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 14:53:39.882221 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 14:53:39.888675 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 14:53:39.893567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 14:53:39.894724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 14:53:39.894815 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 14:53:39.896071 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 14:53:39.896104 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 14:53:39.897159 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 9 14:53:39.897212 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 14:53:39.898790 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 14:53:39.898833 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 14:53:39.899932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 14:53:39.899978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 14:53:39.901793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 14:53:39.902941 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 14:53:39.902995 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 14:53:39.905959 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 14:53:39.906065 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 14:53:39.907060 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 14:53:39.907106 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 14:53:39.909166 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 14:53:39.909208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 14:53:39.910129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 14:53:39.910174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 14:53:39.913081 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 14:53:39.913134 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Jul 9 14:53:39.913174 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 14:53:39.913222 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 14:53:39.913567 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 14:53:39.916391 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 14:53:39.921541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 14:53:39.921641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 14:53:39.922428 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 14:53:39.925422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 14:53:39.942532 systemd[1]: Switching root. Jul 9 14:53:39.981978 systemd-journald[214]: Journal stopped Jul 9 14:53:41.826973 systemd-journald[214]: Received SIGTERM from PID 1 (systemd). Jul 9 14:53:41.827090 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 14:53:41.827118 kernel: SELinux: policy capability open_perms=1 Jul 9 14:53:41.827135 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 14:53:41.827152 kernel: SELinux: policy capability always_check_network=0 Jul 9 14:53:41.827169 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 14:53:41.827219 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 14:53:41.827236 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 14:53:41.827247 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 14:53:41.827259 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 14:53:41.827277 kernel: audit: type=1403 audit(1752072820.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 14:53:41.827299 systemd[1]: Successfully loaded SELinux policy in 72.340ms. Jul 9 14:53:41.827445 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.036ms. 
Jul 9 14:53:41.827466 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 14:53:41.827505 systemd[1]: Detected virtualization kvm. Jul 9 14:53:41.827519 systemd[1]: Detected architecture x86-64. Jul 9 14:53:41.827534 systemd[1]: Detected first boot. Jul 9 14:53:41.827550 systemd[1]: Hostname set to . Jul 9 14:53:41.827565 systemd[1]: Initializing machine ID from VM UUID. Jul 9 14:53:41.827580 zram_generator::config[1058]: No configuration found. Jul 9 14:53:41.827603 kernel: Guest personality initialized and is inactive Jul 9 14:53:41.827617 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 9 14:53:41.827632 kernel: Initialized host personality Jul 9 14:53:41.827664 kernel: NET: Registered PF_VSOCK protocol family Jul 9 14:53:41.827677 systemd[1]: Populated /etc with preset unit settings. Jul 9 14:53:41.827693 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 14:53:41.827717 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 14:53:41.827729 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 14:53:41.827744 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 14:53:41.827757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 14:53:41.827775 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 14:53:41.827805 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 14:53:41.827819 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 9 14:53:41.827835 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 14:53:41.827851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 14:53:41.827863 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 14:53:41.827875 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 14:53:41.827888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 14:53:41.827904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 14:53:41.827922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 14:53:41.827956 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 14:53:41.827970 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 14:53:41.827983 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 14:53:41.827998 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 9 14:53:41.828011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 14:53:41.828029 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 14:53:41.828067 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 14:53:41.828092 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 14:53:41.828104 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 14:53:41.828116 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 14:53:41.828128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 9 14:53:41.828144 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 14:53:41.828156 systemd[1]: Reached target slices.target - Slice Units. Jul 9 14:53:41.828168 systemd[1]: Reached target swap.target - Swaps. Jul 9 14:53:41.828180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 14:53:41.828213 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 14:53:41.828226 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 14:53:41.828239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 14:53:41.828254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 14:53:41.828274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 14:53:41.828287 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 14:53:41.828299 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 14:53:41.828336 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 14:53:41.828349 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 14:53:41.828383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:53:41.828397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 14:53:41.828409 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 14:53:41.828466 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 14:53:41.828480 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 14:53:41.828498 systemd[1]: Reached target machines.target - Containers. 
Jul 9 14:53:41.828511 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 14:53:41.828525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 14:53:41.828544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 14:53:41.828579 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 14:53:41.828594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 14:53:41.828608 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 14:53:41.828621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 14:53:41.828634 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 14:53:41.828647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 14:53:41.828660 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 14:53:41.828683 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 14:53:41.828716 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 14:53:41.828731 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 14:53:41.828744 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 14:53:41.828761 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 14:53:41.828775 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 14:53:41.828809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 9 14:53:41.828823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 14:53:41.828840 kernel: loop: module loaded Jul 9 14:53:41.828853 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 14:53:41.828866 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 14:53:41.828879 kernel: fuse: init (API version 7.41) Jul 9 14:53:41.828892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 14:53:41.828910 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 14:53:41.828927 systemd[1]: Stopped verity-setup.service. Jul 9 14:53:41.828972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:53:41.828987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 14:53:41.829001 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 14:53:41.829017 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 14:53:41.829030 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 14:53:41.829063 kernel: ACPI: bus type drm_connector registered Jul 9 14:53:41.829077 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 14:53:41.829090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 14:53:41.829103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 14:53:41.829121 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 14:53:41.829137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 14:53:41.829151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 14:53:41.829164 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 9 14:53:41.829177 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 14:53:41.829211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 14:53:41.829234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 14:53:41.829248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 14:53:41.829261 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 14:53:41.829274 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 14:53:41.829287 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 14:53:41.829300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 14:53:41.829376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 14:53:41.829390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 14:53:41.829432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 14:53:41.829471 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 14:53:41.829492 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 14:53:41.829530 systemd-journald[1144]: Collecting audit messages is disabled. Jul 9 14:53:41.829584 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 14:53:41.829599 systemd-journald[1144]: Journal started Jul 9 14:53:41.829649 systemd-journald[1144]: Runtime Journal (/run/log/journal/8240d74addd74686a1713d35a9ca20ed) is 8M, max 78.5M, 70.5M free. Jul 9 14:53:41.383297 systemd[1]: Queued start job for default target multi-user.target. Jul 9 14:53:41.409607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 14:53:41.410223 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 9 14:53:41.839329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 14:53:41.844324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 14:53:41.848347 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 14:53:41.853400 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 14:53:41.856339 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 14:53:41.860396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 14:53:41.873872 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 14:53:41.881396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 14:53:41.881461 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 14:53:41.883766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 14:53:41.889550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 14:53:41.899924 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 14:53:41.904440 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 14:53:41.910909 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 14:53:41.915866 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 14:53:41.917586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 14:53:41.919236 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jul 9 14:53:41.920026 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 14:53:41.921131 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 14:53:41.939768 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 14:53:41.944248 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 14:53:41.948337 kernel: loop0: detected capacity change from 0 to 114008 Jul 9 14:53:41.949850 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 14:53:41.965005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 14:53:41.988516 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jul 9 14:53:41.988536 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jul 9 14:53:41.995435 systemd-journald[1144]: Time spent on flushing to /var/log/journal/8240d74addd74686a1713d35a9ca20ed is 16.757ms for 982 entries. Jul 9 14:53:41.995435 systemd-journald[1144]: System Journal (/var/log/journal/8240d74addd74686a1713d35a9ca20ed) is 8M, max 584.8M, 576.8M free. Jul 9 14:53:42.025058 systemd-journald[1144]: Received client request to flush runtime journal. Jul 9 14:53:42.025100 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 14:53:41.993975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 14:53:41.998558 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 14:53:42.003350 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 14:53:42.027670 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jul 9 14:53:42.046329 kernel: loop1: detected capacity change from 0 to 8 Jul 9 14:53:42.067348 kernel: loop2: detected capacity change from 0 to 146480 Jul 9 14:53:42.092925 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 14:53:42.098146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 14:53:42.134543 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 9 14:53:42.134811 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 9 14:53:42.140424 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 14:53:42.153326 kernel: loop3: detected capacity change from 0 to 224512 Jul 9 14:53:42.207338 kernel: loop4: detected capacity change from 0 to 114008 Jul 9 14:53:42.280356 kernel: loop5: detected capacity change from 0 to 8 Jul 9 14:53:42.292994 kernel: loop6: detected capacity change from 0 to 146480 Jul 9 14:53:42.358527 kernel: loop7: detected capacity change from 0 to 224512 Jul 9 14:53:42.417045 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 14:53:42.421931 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 9 14:53:42.423072 (sd-merge)[1224]: Merged extensions into '/usr'. Jul 9 14:53:42.433085 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 14:53:42.433435 systemd[1]: Reloading... Jul 9 14:53:42.541356 zram_generator::config[1246]: No configuration found. Jul 9 14:53:42.704788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:53:42.826450 systemd[1]: Reloading finished in 392 ms. Jul 9 14:53:42.839177 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jul 9 14:53:42.840846 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 14:53:42.852538 systemd[1]: Starting ensure-sysext.service... Jul 9 14:53:42.857170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 14:53:42.865241 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 14:53:42.880384 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 14:53:42.880787 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 14:53:42.880802 systemd[1]: Reload requested from client PID 1306 ('systemctl') (unit ensure-sysext.service)... Jul 9 14:53:42.880813 systemd[1]: Reloading... Jul 9 14:53:42.881098 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 14:53:42.882423 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 14:53:42.883275 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 14:53:42.883789 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Jul 9 14:53:42.883928 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Jul 9 14:53:42.888181 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 14:53:42.888303 systemd-tmpfiles[1307]: Skipping /boot Jul 9 14:53:42.911091 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 14:53:42.911103 systemd-tmpfiles[1307]: Skipping /boot Jul 9 14:53:42.927961 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 9 14:53:42.937833 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
Jul 9 14:53:42.953338 zram_generator::config[1331]: No configuration found.
Jul 9 14:53:43.192194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 14:53:43.261343 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 9 14:53:43.268339 kernel: ACPI: button: Power Button [PWRF]
Jul 9 14:53:43.272338 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 14:53:43.291488 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 9 14:53:43.291824 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 9 14:53:43.367723 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 14:53:43.368017 systemd[1]: Reloading finished in 486 ms.
Jul 9 14:53:43.377918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 14:53:43.379860 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 14:53:43.392036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 14:53:43.414363 systemd[1]: Finished ensure-sysext.service.
Jul 9 14:53:43.440256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 14:53:43.451940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 14:53:43.455477 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 14:53:43.457901 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 14:53:43.458634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 14:53:43.460636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 14:53:43.465633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 14:53:43.472740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 14:53:43.474418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 14:53:43.475139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 14:53:43.477451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 14:53:43.478121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 14:53:43.480203 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 14:53:43.488430 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 14:53:43.495814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 14:53:43.501690 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 14:53:43.506544 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 14:53:43.510747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:53:43.511440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 14:53:43.516063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 14:53:43.516296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 14:53:43.538713 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 14:53:43.539740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 14:53:43.540974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 14:53:43.551680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 14:53:43.552259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 14:53:43.554431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 14:53:43.554762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 14:53:43.556831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 14:53:43.556922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 14:53:43.564608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 14:53:43.568685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 14:53:43.592637 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 9 14:53:43.592762 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 9 14:53:43.598829 kernel: Console: switching to colour dummy device 80x25
Jul 9 14:53:43.600878 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 9 14:53:43.600912 kernel: [drm] features: -context_init
Jul 9 14:53:43.603191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 14:53:43.607121 kernel: [drm] number of scanouts: 1
Jul 9 14:53:43.606282 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 14:53:43.616255 kernel: [drm] number of cap sets: 0
Jul 9 14:53:43.618432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 14:53:43.622536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:53:43.626658 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 14:53:43.628342 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 9 14:53:43.630797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:53:43.643754 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 14:53:43.653471 augenrules[1492]: No rules
Jul 9 14:53:43.655577 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 14:53:43.655862 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 14:53:43.661018 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 14:53:43.661851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 14:53:43.707134 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 14:53:43.742674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:53:43.804046 systemd-resolved[1454]: Positive Trust Anchors:
Jul 9 14:53:43.804060 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 14:53:43.804101 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 14:53:43.810244 systemd-networkd[1452]: lo: Link UP
Jul 9 14:53:43.810625 systemd-networkd[1452]: lo: Gained carrier
Jul 9 14:53:43.813545 systemd-networkd[1452]: Enumeration completed
Jul 9 14:53:43.813761 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 14:53:43.815560 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:53:43.815568 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 14:53:43.816530 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 14:53:43.817497 systemd-networkd[1452]: eth0: Link UP
Jul 9 14:53:43.817750 systemd-networkd[1452]: eth0: Gained carrier
Jul 9 14:53:43.817831 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:53:43.819537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 14:53:43.819880 systemd-resolved[1454]: Using system hostname 'ci-9999-9-100-3d8d1010bc.novalocal'.
Jul 9 14:53:43.821665 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 14:53:43.822458 systemd[1]: Reached target network.target - Network.
Jul 9 14:53:43.822585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 14:53:43.827194 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 14:53:43.827338 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 14:53:43.827463 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 14:53:43.827540 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 14:53:43.827637 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 9 14:53:43.827695 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 14:53:43.827738 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 14:53:43.827765 systemd[1]: Reached target paths.target - Path Units.
Jul 9 14:53:43.827808 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 14:53:43.827969 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 14:53:43.828104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 14:53:43.828163 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 14:53:43.829728 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 14:53:43.831369 systemd-networkd[1452]: eth0: DHCPv4 address 172.24.4.253/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 9 14:53:43.833119 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Jul 9 14:53:43.834631 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 14:53:43.839617 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 14:53:43.839854 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 14:53:43.839931 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 14:53:43.842229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 14:53:43.843144 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 14:53:43.844062 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 14:53:43.844579 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 14:53:43.845800 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 14:53:43.846002 systemd[1]: Reached target basic.target - Basic System.
Jul 9 14:53:43.846214 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 14:53:43.846356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 14:53:43.847453 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 14:53:43.849542 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 9 14:53:43.851148 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 14:53:43.853591 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 14:53:43.858622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 14:53:43.860333 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 14:53:43.860488 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 14:53:43.864588 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 9 14:53:43.867826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 14:53:43.871975 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 14:53:43.873809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 14:53:43.883063 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:43.879240 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 14:53:43.891562 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 14:53:43.893464 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 14:53:43.894095 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 14:53:43.898385 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 14:53:43.906396 jq[1522]: false
Jul 9 14:53:43.910773 extend-filesystems[1523]: Found /dev/vda6
Jul 9 14:53:43.907197 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 14:53:43.912335 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache
Jul 9 14:53:43.911248 oslogin_cache_refresh[1524]: Refreshing passwd entry cache
Jul 9 14:53:43.913133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 14:53:43.913538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 14:53:43.913714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 14:53:43.921703 extend-filesystems[1523]: Found /dev/vda9
Jul 9 14:53:43.928438 extend-filesystems[1523]: Checking size of /dev/vda9
Jul 9 14:53:43.932555 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 14:53:43.932786 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 14:53:43.936241 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting
Jul 9 14:53:43.936241 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 14:53:43.936241 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache
Jul 9 14:53:43.935282 oslogin_cache_refresh[1524]: Failure getting users, quitting
Jul 9 14:53:43.935298 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 9 14:53:43.935524 oslogin_cache_refresh[1524]: Refreshing group entry cache
Jul 9 14:53:43.944499 oslogin_cache_refresh[1524]: Failure getting groups, quitting
Jul 9 14:53:43.945008 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting
Jul 9 14:53:43.945008 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 14:53:43.944511 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 9 14:53:43.947858 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 9 14:53:43.949742 jq[1537]: true
Jul 9 14:53:44.934747 systemd-resolved[1454]: Clock change detected. Flushing caches.
Jul 9 14:53:44.934778 systemd-timesyncd[1458]: Contacted time server 198.46.254.130:123 (0.flatcar.pool.ntp.org).
Jul 9 14:53:44.934822 systemd-timesyncd[1458]: Initial clock synchronization to Wed 2025-07-09 14:53:44.934633 UTC.
Jul 9 14:53:44.935292 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 9 14:53:44.938334 update_engine[1535]: I20250709 14:53:44.938052 1535 main.cc:92] Flatcar Update Engine starting
Jul 9 14:53:44.937991 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 14:53:44.939268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 14:53:44.959007 extend-filesystems[1523]: Resized partition /dev/vda9
Jul 9 14:53:44.962851 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 14:53:44.964482 tar[1541]: linux-amd64/LICENSE
Jul 9 14:53:44.965135 tar[1541]: linux-amd64/helm
Jul 9 14:53:44.966369 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025)
Jul 9 14:53:44.991637 jq[1561]: true
Jul 9 14:53:44.995040 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 9 14:53:45.006479 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 9 14:53:45.014923 dbus-daemon[1520]: [system] SELinux support is enabled
Jul 9 14:53:45.015311 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 9 14:53:45.015519 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 14:53:45.018809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 14:53:45.036109 update_engine[1535]: I20250709 14:53:45.026664 1535 update_check_scheduler.cc:74] Next update check in 2m0s
Jul 9 14:53:45.018833 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 14:53:45.020062 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 14:53:45.020084 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 14:53:45.026863 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 14:53:45.043341 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 9 14:53:45.043341 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 9 14:53:45.043341 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 9 14:53:45.044131 extend-filesystems[1523]: Resized filesystem in /dev/vda9
Jul 9 14:53:45.067628 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 14:53:45.068229 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 14:53:45.068441 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 14:53:45.094532 systemd-logind[1534]: New seat seat0.
Jul 9 14:53:45.095869 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 9 14:53:45.095894 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 14:53:45.096117 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 14:53:45.176771 bash[1588]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 14:53:45.178363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 14:53:45.189516 systemd[1]: Starting sshkeys.service...
Jul 9 14:53:45.234101 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 9 14:53:45.236996 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 9 14:53:45.267957 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 14:53:45.395336 containerd[1557]: time="2025-07-09T14:53:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 14:53:45.399603 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 14:53:45.400375 containerd[1557]: time="2025-07-09T14:53:45.400343638Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 14:53:45.419144 containerd[1557]: time="2025-07-09T14:53:45.419100246Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.989µs" Jul 9 14:53:45.419144 containerd[1557]: time="2025-07-09T14:53:45.419139089Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 14:53:45.419243 containerd[1557]: time="2025-07-09T14:53:45.419158536Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 14:53:45.419447 containerd[1557]: time="2025-07-09T14:53:45.419421569Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 14:53:45.419479 containerd[1557]: time="2025-07-09T14:53:45.419447427Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 14:53:45.419501 containerd[1557]: time="2025-07-09T14:53:45.419475991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 14:53:45.419564 containerd[1557]: time="2025-07-09T14:53:45.419540552Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 
14:53:45.419564 containerd[1557]: time="2025-07-09T14:53:45.419560870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 14:53:45.419831 containerd[1557]: time="2025-07-09T14:53:45.419802463Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 14:53:45.419831 containerd[1557]: time="2025-07-09T14:53:45.419827180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 14:53:45.419880 containerd[1557]: time="2025-07-09T14:53:45.419841226Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 14:53:45.419880 containerd[1557]: time="2025-07-09T14:53:45.419850904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 14:53:45.427440 containerd[1557]: time="2025-07-09T14:53:45.419926496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 14:53:45.427704 containerd[1557]: time="2025-07-09T14:53:45.427677994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 14:53:45.427734 containerd[1557]: time="2025-07-09T14:53:45.427717147Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 14:53:45.427763 containerd[1557]: time="2025-07-09T14:53:45.427731314Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 14:53:45.427796 containerd[1557]: 
time="2025-07-09T14:53:45.427762903Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 14:53:45.428060 containerd[1557]: time="2025-07-09T14:53:45.428030675Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 14:53:45.429950 containerd[1557]: time="2025-07-09T14:53:45.428178002Z" level=info msg="metadata content store policy set" policy=shared Jul 9 14:53:45.441278 containerd[1557]: time="2025-07-09T14:53:45.441249878Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 14:53:45.441394 containerd[1557]: time="2025-07-09T14:53:45.441372859Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 14:53:45.441531 containerd[1557]: time="2025-07-09T14:53:45.441512350Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 14:53:45.441604 containerd[1557]: time="2025-07-09T14:53:45.441587972Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 14:53:45.441669 containerd[1557]: time="2025-07-09T14:53:45.441654196Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 14:53:45.441732 containerd[1557]: time="2025-07-09T14:53:45.441717525Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 14:53:45.441797 containerd[1557]: time="2025-07-09T14:53:45.441782477Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.442972288Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.442995873Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443008506Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443019487Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443041829Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443155141Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443177032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443197911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443216907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443234250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443248045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443262082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443275767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443287319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 14:53:45.443961 containerd[1557]: time="2025-07-09T14:53:45.443299312Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 14:53:45.444302 containerd[1557]: time="2025-07-09T14:53:45.443311585Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 14:53:45.444302 containerd[1557]: time="2025-07-09T14:53:45.443372629Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 14:53:45.444302 containerd[1557]: time="2025-07-09T14:53:45.443387196Z" level=info msg="Start snapshots syncer" Jul 9 14:53:45.444302 containerd[1557]: time="2025-07-09T14:53:45.443409047Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443638989Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443698390Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443752582Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443840156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443862909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443876063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443893276Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443906621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 14:53:45.444388 containerd[1557]: time="2025-07-09T14:53:45.443917782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.443930445Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447047842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447099809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447113144Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447156546Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447176644Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447187494Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447198084Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447261052Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447274116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447287842Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447307890Z" level=info msg="runtime interface created"
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447315113Z" level=info msg="created NRI interface"
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447325914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447339880Z" level=info msg="Connect containerd service"
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.447367341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 14:53:45.448670 containerd[1557]: time="2025-07-09T14:53:45.448267590Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 14:53:45.709751 containerd[1557]: time="2025-07-09T14:53:45.709631598Z" level=info msg="Start subscribing containerd event"
Jul 9 14:53:45.709751 containerd[1557]: time="2025-07-09T14:53:45.709691280Z" level=info msg="Start recovering state"
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709797008Z" level=info msg="Start event monitor"
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709812768Z" level=info msg="Start cni network conf syncer for default"
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709820692Z" level=info msg="Start streaming server"
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709836161Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709844106Z" level=info msg="runtime interface starting up..."
Jul 9 14:53:45.709865 containerd[1557]: time="2025-07-09T14:53:45.709849967Z" level=info msg="starting plugins..."
Jul 9 14:53:45.710030 containerd[1557]: time="2025-07-09T14:53:45.709867039Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 9 14:53:45.712290 containerd[1557]: time="2025-07-09T14:53:45.712254096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 9 14:53:45.713185 containerd[1557]: time="2025-07-09T14:53:45.712343524Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 9 14:53:45.713185 containerd[1557]: time="2025-07-09T14:53:45.712496842Z" level=info msg="containerd successfully booted in 0.317920s"
Jul 9 14:53:45.712580 systemd[1]: Started containerd.service - containerd container runtime.
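The level=error line about "no network config found in /etc/cni/net.d" is expected on a first boot: containerd's CRI plugin starts before any CNI configuration exists, and the "Start cni network conf syncer for default" line shows the watcher that will pick one up later. As a hedged sketch of what that syncer expects, the block below writes a minimal bridge conflist; the network name, bridge name, and subnet are invented for illustration, and it targets a scratch directory rather than the real /etc/cni/net.d.

```shell
# Illustrative only: on a real node this file would live at
# /etc/cni/net.d/10-demo.conflist, where containerd's conf syncer scans.
confdir="$(mktemp -d)"
cat > "$confdir/10-demo.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
ls "$confdir"
```

Once a file like this lands in the configured confDir (/etc/cni/net.d per the dumped CRI config above), the error stops on the next sync; no containerd restart is needed.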
Jul 9 14:53:45.736719 tar[1541]: linux-amd64/README.md
Jul 9 14:53:45.754791 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 9 14:53:45.815964 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 14:53:45.857470 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 14:53:45.864599 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 14:53:45.869658 systemd[1]: Started sshd@0-172.24.4.253:22-172.24.4.1:44574.service - OpenSSH per-connection server daemon (172.24.4.1:44574).
Jul 9 14:53:45.893327 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 14:53:45.893690 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 14:53:45.900216 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 14:53:45.905978 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:45.915201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 14:53:45.922390 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 14:53:45.929427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 9 14:53:45.931409 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 14:53:46.083193 systemd-networkd[1452]: eth0: Gained IPv6LL
Jul 9 14:53:46.088645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 14:53:46.090685 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 14:53:46.096230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:53:46.100563 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 14:53:46.162683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 14:53:46.293871 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:46.818361 sshd[1628]: Accepted publickey for core from 172.24.4.1 port 44574 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:53:46.823702 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:53:46.844763 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 9 14:53:46.848121 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 9 14:53:46.873953 systemd-logind[1534]: New session 1 of user core.
Jul 9 14:53:46.886321 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 9 14:53:46.892690 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 9 14:53:46.911927 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 9 14:53:46.914876 systemd-logind[1534]: New session c1 of user core.
Jul 9 14:53:47.145187 systemd[1654]: Queued start job for default target default.target.
Jul 9 14:53:47.155150 systemd[1654]: Created slice app.slice - User Application Slice.
Jul 9 14:53:47.155373 systemd[1654]: Reached target paths.target - Paths.
Jul 9 14:53:47.155519 systemd[1654]: Reached target timers.target - Timers.
Jul 9 14:53:47.156960 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 9 14:53:47.168781 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 9 14:53:47.169897 systemd[1654]: Reached target sockets.target - Sockets.
Jul 9 14:53:47.170074 systemd[1654]: Reached target basic.target - Basic System.
Jul 9 14:53:47.170204 systemd[1654]: Reached target default.target - Main User Target.
Jul 9 14:53:47.170321 systemd[1654]: Startup finished in 243ms.
Jul 9 14:53:47.170876 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 9 14:53:47.179140 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 14:53:47.677795 systemd[1]: Started sshd@1-172.24.4.253:22-172.24.4.1:43690.service - OpenSSH per-connection server daemon (172.24.4.1:43690).
Jul 9 14:53:47.928005 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:48.177120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:53:48.195926 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:53:48.315312 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:49.440504 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 43690 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:53:49.444031 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:53:49.457764 systemd-logind[1534]: New session 2 of user core.
Jul 9 14:53:49.471465 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 14:53:49.732636 kubelet[1673]: E0709 14:53:49.732398 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:53:49.734907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:53:49.735267 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:53:49.737122 systemd[1]: kubelet.service: Consumed 2.346s CPU time, 267.2M memory peak.
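The kubelet failure above recurs throughout this boot: /var/lib/kubelet/config.yaml is only generated once kubeadm init or kubeadm join runs on the node, so until then the kubelet exits with status 1 at startup. A minimal sketch of the same existence check, with an override variable added purely for illustration:

```shell
# Mimic the config-file check the kubelet performs at startup.
# KUBELET_CONFIG is a made-up override for testing; the default is the
# real path the log complains about.
cfg="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$cfg" ]; then
  echo "kubelet config present: $cfg"
else
  echo "kubelet config missing: $cfg (written by kubeadm init/join)"
fi
```

On a node like this one, the failures stop on their own once kubeadm has written the file and systemd's next scheduled restart fires.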
Jul 9 14:53:50.045725 sshd[1681]: Connection closed by 172.24.4.1 port 43690
Jul 9 14:53:50.047189 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Jul 9 14:53:50.063410 systemd[1]: sshd@1-172.24.4.253:22-172.24.4.1:43690.service: Deactivated successfully.
Jul 9 14:53:50.067745 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 14:53:50.071061 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit.
Jul 9 14:53:50.078584 systemd[1]: Started sshd@2-172.24.4.253:22-172.24.4.1:43696.service - OpenSSH per-connection server daemon (172.24.4.1:43696).
Jul 9 14:53:50.081432 systemd-logind[1534]: Removed session 2.
Jul 9 14:53:50.999915 login[1636]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 9 14:53:51.003739 login[1637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 9 14:53:51.015231 systemd-logind[1534]: New session 4 of user core.
Jul 9 14:53:51.023429 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 14:53:51.031365 systemd-logind[1534]: New session 3 of user core.
Jul 9 14:53:51.038395 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 14:53:51.543813 sshd[1688]: Accepted publickey for core from 172.24.4.1 port 43696 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:53:51.546859 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:53:51.558411 systemd-logind[1534]: New session 5 of user core.
Jul 9 14:53:51.580347 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 14:53:51.971999 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:51.986347 coreos-metadata[1519]: Jul 09 14:53:51.986 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:53:52.038385 coreos-metadata[1519]: Jul 09 14:53:52.038 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 9 14:53:52.291860 sshd[1720]: Connection closed by 172.24.4.1 port 43696
Jul 9 14:53:52.293090 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Jul 9 14:53:52.302357 systemd[1]: sshd@2-172.24.4.253:22-172.24.4.1:43696.service: Deactivated successfully.
Jul 9 14:53:52.305913 coreos-metadata[1519]: Jul 09 14:53:52.305 INFO Fetch successful
Jul 9 14:53:52.306455 coreos-metadata[1519]: Jul 09 14:53:52.306 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 9 14:53:52.308052 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 14:53:52.310183 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit.
Jul 9 14:53:52.313411 systemd-logind[1534]: Removed session 5.
Jul 9 14:53:52.317148 coreos-metadata[1519]: Jul 09 14:53:52.317 INFO Fetch successful
Jul 9 14:53:52.317148 coreos-metadata[1519]: Jul 09 14:53:52.317 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 9 14:53:52.328232 coreos-metadata[1519]: Jul 09 14:53:52.328 INFO Fetch successful
Jul 9 14:53:52.328397 coreos-metadata[1519]: Jul 09 14:53:52.328 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 9 14:53:52.343116 coreos-metadata[1519]: Jul 09 14:53:52.343 INFO Fetch successful
Jul 9 14:53:52.343244 coreos-metadata[1519]: Jul 09 14:53:52.343 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 9 14:53:52.357605 coreos-metadata[1519]: Jul 09 14:53:52.357 INFO Fetch successful
Jul 9 14:53:52.357605 coreos-metadata[1519]: Jul 09 14:53:52.357 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 9 14:53:52.371497 coreos-metadata[1519]: Jul 09 14:53:52.371 INFO Fetch successful
Jul 9 14:53:52.380509 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:53:52.396702 coreos-metadata[1595]: Jul 09 14:53:52.396 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:53:52.427310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 9 14:53:52.430081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 14:53:52.444801 coreos-metadata[1595]: Jul 09 14:53:52.444 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 9 14:53:52.460444 coreos-metadata[1595]: Jul 09 14:53:52.460 INFO Fetch successful
Jul 9 14:53:52.460757 coreos-metadata[1595]: Jul 09 14:53:52.460 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 9 14:53:52.477706 coreos-metadata[1595]: Jul 09 14:53:52.477 INFO Fetch successful
Jul 9 14:53:52.484197 unknown[1595]: wrote ssh authorized keys file for user: core
Jul 9 14:53:52.538220 update-ssh-keys[1735]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 14:53:52.541181 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 9 14:53:52.545620 systemd[1]: Finished sshkeys.service.
Jul 9 14:53:52.551877 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 9 14:53:52.552624 systemd[1]: Startup finished in 4.155s (kernel) + 16.597s (initrd) + 11.225s (userspace) = 31.978s.
Jul 9 14:53:59.988831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 14:53:59.997900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:54:00.419658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:54:00.434524 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:54:00.537307 kubelet[1746]: E0709 14:54:00.537116 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:54:00.544402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:54:00.544738 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:54:00.545736 systemd[1]: kubelet.service: Consumed 411ms CPU time, 110.5M memory peak.
Jul 9 14:54:02.317664 systemd[1]: Started sshd@3-172.24.4.253:22-172.24.4.1:51572.service - OpenSSH per-connection server daemon (172.24.4.1:51572).
Jul 9 14:54:03.980285 sshd[1755]: Accepted publickey for core from 172.24.4.1 port 51572 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:03.984439 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:03.998809 systemd-logind[1534]: New session 6 of user core.
Jul 9 14:54:04.013261 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 14:54:04.605448 sshd[1758]: Connection closed by 172.24.4.1 port 51572
Jul 9 14:54:04.608120 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Jul 9 14:54:04.627581 systemd[1]: sshd@3-172.24.4.253:22-172.24.4.1:51572.service: Deactivated successfully.
Jul 9 14:54:04.632033 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 14:54:04.635550 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit.
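The "Scheduled restart job" entries recur roughly ten seconds after each kubelet failure (14:53:49 failure, 14:53:59 restart; 14:54:00 failure, 14:54:10 restart), and the "Referenced but unset environment variable" warnings name the variables a kubeadm-style drop-in references. A sketch of unit settings consistent with that behavior; these values are inferred from the observed cadence and warnings, not read from this host:

```
[Service]
# Restart on any exit, with the ~10 s gap seen between attempts in the log
Restart=always
RestartSec=10
# A missing EnvironmentFile (the leading "-" makes it optional) leaves
# KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS unset, producing the warnings
EnvironmentFile=-/etc/default/kubelet
```

Under such a unit, the restart counter in the log simply increments until the missing kubelet config appears and a start finally succeeds.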
Jul 9 14:54:04.643373 systemd[1]: Started sshd@4-172.24.4.253:22-172.24.4.1:44386.service - OpenSSH per-connection server daemon (172.24.4.1:44386).
Jul 9 14:54:04.646018 systemd-logind[1534]: Removed session 6.
Jul 9 14:54:05.968732 sshd[1764]: Accepted publickey for core from 172.24.4.1 port 44386 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:05.971725 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:05.986047 systemd-logind[1534]: New session 7 of user core.
Jul 9 14:54:05.993303 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 14:54:06.604449 sshd[1767]: Connection closed by 172.24.4.1 port 44386
Jul 9 14:54:06.605429 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Jul 9 14:54:06.618263 systemd[1]: sshd@4-172.24.4.253:22-172.24.4.1:44386.service: Deactivated successfully.
Jul 9 14:54:06.622322 systemd[1]: session-7.scope: Deactivated successfully.
Jul 9 14:54:06.624704 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit.
Jul 9 14:54:06.632526 systemd[1]: Started sshd@5-172.24.4.253:22-172.24.4.1:44394.service - OpenSSH per-connection server daemon (172.24.4.1:44394).
Jul 9 14:54:06.635524 systemd-logind[1534]: Removed session 7.
Jul 9 14:54:08.056538 sshd[1773]: Accepted publickey for core from 172.24.4.1 port 44394 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:08.059294 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:08.070980 systemd-logind[1534]: New session 8 of user core.
Jul 9 14:54:08.087309 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 9 14:54:08.979000 sshd[1776]: Connection closed by 172.24.4.1 port 44394
Jul 9 14:54:08.978205 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Jul 9 14:54:08.996782 systemd[1]: sshd@5-172.24.4.253:22-172.24.4.1:44394.service: Deactivated successfully.
Jul 9 14:54:09.000290 systemd[1]: session-8.scope: Deactivated successfully.
Jul 9 14:54:09.003444 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit.
Jul 9 14:54:09.009290 systemd[1]: Started sshd@6-172.24.4.253:22-172.24.4.1:44406.service - OpenSSH per-connection server daemon (172.24.4.1:44406).
Jul 9 14:54:09.013818 systemd-logind[1534]: Removed session 8.
Jul 9 14:54:10.391341 sshd[1782]: Accepted publickey for core from 172.24.4.1 port 44406 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:10.394293 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:10.406094 systemd-logind[1534]: New session 9 of user core.
Jul 9 14:54:10.423321 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 14:54:10.571454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 9 14:54:10.576201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:54:10.968777 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 14:54:10.970389 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:54:10.999121 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jul 9 14:54:11.125560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:54:11.140504 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:54:11.147480 sshd[1785]: Connection closed by 172.24.4.1 port 44406
Jul 9 14:54:11.146411 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Jul 9 14:54:11.164895 systemd[1]: sshd@6-172.24.4.253:22-172.24.4.1:44406.service: Deactivated successfully.
Jul 9 14:54:11.174391 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 14:54:11.179375 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit.
Jul 9 14:54:11.188815 systemd[1]: Started sshd@7-172.24.4.253:22-172.24.4.1:44414.service - OpenSSH per-connection server daemon (172.24.4.1:44414).
Jul 9 14:54:11.192257 systemd-logind[1534]: Removed session 9.
Jul 9 14:54:11.260189 kubelet[1796]: E0709 14:54:11.259254 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:54:11.262129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:54:11.262307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:54:11.262911 systemd[1]: kubelet.service: Consumed 420ms CPU time, 108.7M memory peak.
Jul 9 14:54:12.586387 sshd[1805]: Accepted publickey for core from 172.24.4.1 port 44414 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:12.589176 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:12.602060 systemd-logind[1534]: New session 10 of user core.
Jul 9 14:54:12.612320 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 14:54:13.009798 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 14:54:13.010645 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:54:13.024490 sudo[1812]: pam_unix(sudo:session): session closed for user root
Jul 9 14:54:13.037568 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 14:54:13.039102 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:54:13.065130 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 14:54:13.154527 augenrules[1834]: No rules
Jul 9 14:54:13.156587 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 14:54:13.157154 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 14:54:13.159238 sudo[1811]: pam_unix(sudo:session): session closed for user root
Jul 9 14:54:13.409060 sshd[1810]: Connection closed by 172.24.4.1 port 44414
Jul 9 14:54:13.410018 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Jul 9 14:54:13.426183 systemd[1]: sshd@7-172.24.4.253:22-172.24.4.1:44414.service: Deactivated successfully.
Jul 9 14:54:13.430577 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 14:54:13.433521 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
Jul 9 14:54:13.439506 systemd[1]: Started sshd@8-172.24.4.253:22-172.24.4.1:44418.service - OpenSSH per-connection server daemon (172.24.4.1:44418).
Jul 9 14:54:13.441485 systemd-logind[1534]: Removed session 10.
Jul 9 14:54:14.854694 sshd[1843]: Accepted publickey for core from 172.24.4.1 port 44418 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:54:14.858417 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:54:14.871041 systemd-logind[1534]: New session 11 of user core.
Jul 9 14:54:14.878295 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 14:54:15.416758 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 14:54:15.417636 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:54:16.158026 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 14:54:16.172207 (dockerd)[1864]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 14:54:16.652848 dockerd[1864]: time="2025-07-09T14:54:16.652751341Z" level=info msg="Starting up"
Jul 9 14:54:16.654037 dockerd[1864]: time="2025-07-09T14:54:16.653816459Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 14:54:16.682522 dockerd[1864]: time="2025-07-09T14:54:16.682451253Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 9 14:54:16.728839 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3537742103-merged.mount: Deactivated successfully.
Jul 9 14:54:16.774327 systemd[1]: var-lib-docker-metacopy\x2dcheck3068403437-merged.mount: Deactivated successfully.
Jul 9 14:54:16.818292 dockerd[1864]: time="2025-07-09T14:54:16.818156037Z" level=info msg="Loading containers: start."
Jul 9 14:54:16.845056 kernel: Initializing XFRM netlink socket
Jul 9 14:54:17.203481 systemd-networkd[1452]: docker0: Link UP
Jul 9 14:54:17.214183 dockerd[1864]: time="2025-07-09T14:54:17.214052832Z" level=info msg="Loading containers: done."
Jul 9 14:54:17.246185 dockerd[1864]: time="2025-07-09T14:54:17.246125584Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 14:54:17.246505 dockerd[1864]: time="2025-07-09T14:54:17.246222286Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 9 14:54:17.246505 dockerd[1864]: time="2025-07-09T14:54:17.246309439Z" level=info msg="Initializing buildkit"
Jul 9 14:54:17.304401 dockerd[1864]: time="2025-07-09T14:54:17.304206575Z" level=info msg="Completed buildkit initialization"
Jul 9 14:54:17.328043 dockerd[1864]: time="2025-07-09T14:54:17.327868804Z" level=info msg="Daemon has completed initialization"
Jul 9 14:54:17.330103 dockerd[1864]: time="2025-07-09T14:54:17.328363682Z" level=info msg="API listen on /run/docker.sock"
Jul 9 14:54:17.328666 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 14:54:18.927678 containerd[1557]: time="2025-07-09T14:54:18.927327120Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 9 14:54:19.718571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747524895.mount: Deactivated successfully.
Jul 9 14:54:21.320560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 9 14:54:21.324116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:54:21.529188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:54:21.538298 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:54:21.759442 containerd[1557]: time="2025-07-09T14:54:21.756293714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:21.764369 containerd[1557]: time="2025-07-09T14:54:21.764254881Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jul 9 14:54:21.767997 containerd[1557]: time="2025-07-09T14:54:21.767845279Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:21.781906 containerd[1557]: time="2025-07-09T14:54:21.781774662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:21.792352 containerd[1557]: time="2025-07-09T14:54:21.791911587Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.864089703s"
Jul 9 14:54:21.792352 containerd[1557]: time="2025-07-09T14:54:21.792064676Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 9 14:54:21.800535 containerd[1557]: time="2025-07-09T14:54:21.800461887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 9 14:54:21.829755 kubelet[2137]: E0709 14:54:21.829658 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:54:21.834717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:54:21.835352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:54:21.837243 systemd[1]: kubelet.service: Consumed 273ms CPU time, 110.5M memory peak.
Jul 9 14:54:23.882161 containerd[1557]: time="2025-07-09T14:54:23.881962585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:23.883481 containerd[1557]: time="2025-07-09T14:54:23.883292714Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jul 9 14:54:23.885147 containerd[1557]: time="2025-07-09T14:54:23.885100213Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:23.889054 containerd[1557]: time="2025-07-09T14:54:23.888996302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:23.890476 containerd[1557]: time="2025-07-09T14:54:23.890438512Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.089702598s"
Jul 9 14:54:23.890583 containerd[1557]: time="2025-07-09T14:54:23.890481043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 9 14:54:23.892224 containerd[1557]: time="2025-07-09T14:54:23.892179446Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 9 14:54:25.792059 containerd[1557]: time="2025-07-09T14:54:25.790618838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:25.794149 containerd[1557]: time="2025-07-09T14:54:25.792883587Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jul 9 14:54:25.795973 containerd[1557]: time="2025-07-09T14:54:25.794762779Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:25.799406 containerd[1557]: time="2025-07-09T14:54:25.799367377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:25.800541 containerd[1557]: time="2025-07-09T14:54:25.800510272Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.908290239s"
Jul 9 14:54:25.800731 containerd[1557]: time="2025-07-09T14:54:25.800712523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 9 14:54:25.802753 containerd[1557]: time="2025-07-09T14:54:25.802295978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 9 14:54:27.356879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37341652.mount: Deactivated successfully.
Jul 9 14:54:27.948340 containerd[1557]: time="2025-07-09T14:54:27.948285670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:27.949724 containerd[1557]: time="2025-07-09T14:54:27.949695507Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jul 9 14:54:27.950597 containerd[1557]: time="2025-07-09T14:54:27.950552260Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:27.954243 containerd[1557]: time="2025-07-09T14:54:27.954214288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:27.954925 containerd[1557]: time="2025-07-09T14:54:27.954718047Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.151817911s"
Jul 9 14:54:27.955078 containerd[1557]: time="2025-07-09T14:54:27.955058138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 9 14:54:27.956032 containerd[1557]: time="2025-07-09T14:54:27.955907548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 9 14:54:28.634971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347630546.mount: Deactivated successfully.
Jul 9 14:54:30.121700 containerd[1557]: time="2025-07-09T14:54:30.121625153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:30.123130 containerd[1557]: time="2025-07-09T14:54:30.122907116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 9 14:54:30.124232 containerd[1557]: time="2025-07-09T14:54:30.124189650Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:30.127822 containerd[1557]: time="2025-07-09T14:54:30.127758418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:54:30.129072 containerd[1557]: time="2025-07-09T14:54:30.128907420Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.172732238s"
Jul 9 14:54:30.129072 containerd[1557]: time="2025-07-09T14:54:30.128965910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 14:54:30.129727 containerd[1557]: time="2025-07-09T14:54:30.129475179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 14:54:30.575997 update_engine[1535]: I20250709 14:54:30.574842 1535 update_attempter.cc:509] Updating boot flags... Jul 9 14:54:30.726793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871615800.mount: Deactivated successfully. Jul 9 14:54:30.751193 containerd[1557]: time="2025-07-09T14:54:30.751033261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 14:54:30.753411 containerd[1557]: time="2025-07-09T14:54:30.753382463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 9 14:54:30.755052 containerd[1557]: time="2025-07-09T14:54:30.755016419Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 14:54:30.760568 containerd[1557]: time="2025-07-09T14:54:30.760527071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 14:54:30.762271 containerd[1557]: time="2025-07-09T14:54:30.762233924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 632.730893ms" Jul 9 14:54:30.762510 
containerd[1557]: time="2025-07-09T14:54:30.762274350Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 14:54:30.764125 containerd[1557]: time="2025-07-09T14:54:30.762673852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 14:54:31.446222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359997871.mount: Deactivated successfully. Jul 9 14:54:32.070779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 14:54:32.075955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:54:32.460095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:32.474405 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 14:54:32.773427 kubelet[2276]: E0709 14:54:32.773276 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 14:54:32.777350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 14:54:32.777674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 14:54:32.778997 systemd[1]: kubelet.service: Consumed 384ms CPU time, 110.5M memory peak. 
Jul 9 14:54:35.567237 containerd[1557]: time="2025-07-09T14:54:35.565319399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:35.573222 containerd[1557]: time="2025-07-09T14:54:35.570005350Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 9 14:54:35.573222 containerd[1557]: time="2025-07-09T14:54:35.571264959Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:35.582996 containerd[1557]: time="2025-07-09T14:54:35.582145629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:35.585892 containerd[1557]: time="2025-07-09T14:54:35.585816783Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.823077818s" Jul 9 14:54:35.586274 containerd[1557]: time="2025-07-09T14:54:35.586196637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 9 14:54:40.867696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:40.869572 systemd[1]: kubelet.service: Consumed 384ms CPU time, 110.5M memory peak. Jul 9 14:54:40.883160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:54:40.925728 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit session-11.scope)... 
Jul 9 14:54:40.926133 systemd[1]: Reloading... Jul 9 14:54:41.034982 zram_generator::config[2372]: No configuration found. Jul 9 14:54:41.549906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:54:41.700497 systemd[1]: Reloading finished in 773 ms. Jul 9 14:54:41.756754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 14:54:41.756855 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 14:54:41.757388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:41.757432 systemd[1]: kubelet.service: Consumed 296ms CPU time, 98.3M memory peak. Jul 9 14:54:41.760220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:54:42.363109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:42.385052 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 14:54:42.485533 kubelet[2439]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 14:54:42.485533 kubelet[2439]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 14:54:42.485533 kubelet[2439]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 14:54:42.486383 kubelet[2439]: I0709 14:54:42.485596 2439 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 14:54:42.825275 kubelet[2439]: I0709 14:54:42.825209 2439 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 14:54:42.825598 kubelet[2439]: I0709 14:54:42.825565 2439 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 14:54:42.827297 kubelet[2439]: I0709 14:54:42.827253 2439 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 14:54:42.878760 kubelet[2439]: I0709 14:54:42.878691 2439 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 14:54:42.880983 kubelet[2439]: E0709 14:54:42.880095 2439 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:54:42.914175 kubelet[2439]: I0709 14:54:42.914122 2439 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 14:54:42.929393 kubelet[2439]: I0709 14:54:42.929320 2439 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 14:54:42.930491 kubelet[2439]: I0709 14:54:42.930413 2439 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 14:54:42.931286 kubelet[2439]: I0709 14:54:42.930653 2439 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-3d8d1010bc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 14:54:42.932584 kubelet[2439]: I0709 14:54:42.931844 2439 topology_manager.go:138] "Creating topology 
manager with none policy" Jul 9 14:54:42.932584 kubelet[2439]: I0709 14:54:42.931879 2439 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 14:54:42.932584 kubelet[2439]: I0709 14:54:42.932309 2439 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:54:42.946775 kubelet[2439]: I0709 14:54:42.946723 2439 kubelet.go:446] "Attempting to sync node with API server" Jul 9 14:54:42.947415 kubelet[2439]: I0709 14:54:42.947376 2439 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 14:54:42.947729 kubelet[2439]: I0709 14:54:42.947695 2439 kubelet.go:352] "Adding apiserver pod source" Jul 9 14:54:42.948160 kubelet[2439]: I0709 14:54:42.947927 2439 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 14:54:42.956709 kubelet[2439]: W0709 14:54:42.956559 2439 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-3d8d1010bc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Jul 9 14:54:42.958013 kubelet[2439]: E0709 14:54:42.957209 2439 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-3d8d1010bc.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:54:42.958984 kubelet[2439]: W0709 14:54:42.958646 2439 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Jul 9 14:54:42.958984 kubelet[2439]: E0709 14:54:42.958785 2439 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:54:42.959817 kubelet[2439]: I0709 14:54:42.959752 2439 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 14:54:42.961986 kubelet[2439]: I0709 14:54:42.961012 2439 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 14:54:42.961986 kubelet[2439]: W0709 14:54:42.961295 2439 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 14:54:42.967807 kubelet[2439]: I0709 14:54:42.967768 2439 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 14:54:42.968101 kubelet[2439]: I0709 14:54:42.968078 2439 server.go:1287] "Started kubelet" Jul 9 14:54:42.977524 kubelet[2439]: I0709 14:54:42.977479 2439 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 14:54:42.978876 kubelet[2439]: I0709 14:54:42.978857 2439 server.go:479] "Adding debug handlers to kubelet server" Jul 9 14:54:42.982173 kubelet[2439]: I0709 14:54:42.982153 2439 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 14:54:42.982462 kubelet[2439]: I0709 14:54:42.982313 2439 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 14:54:42.982997 kubelet[2439]: I0709 14:54:42.982913 2439 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 14:54:42.987035 kubelet[2439]: E0709 14:54:42.983652 2439 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.253:6443/api/v1/namespaces/default/events\": dial tcp 
172.24.4.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-100-3d8d1010bc.novalocal.18509d0111e73dcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-100-3d8d1010bc.novalocal,UID:ci-9999-9-100-3d8d1010bc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-3d8d1010bc.novalocal,},FirstTimestamp:2025-07-09 14:54:42.968018383 +0000 UTC m=+0.561285054,LastTimestamp:2025-07-09 14:54:42.968018383 +0000 UTC m=+0.561285054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-3d8d1010bc.novalocal,}" Jul 9 14:54:42.994197 kubelet[2439]: E0709 14:54:42.994165 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" Jul 9 14:54:42.995364 kubelet[2439]: I0709 14:54:42.995298 2439 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 14:54:43.002961 kubelet[2439]: I0709 14:54:43.002261 2439 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 14:54:43.003322 kubelet[2439]: I0709 14:54:43.003287 2439 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 14:54:43.003512 kubelet[2439]: I0709 14:54:43.003482 2439 reconciler.go:26] "Reconciler: start to sync state" Jul 9 14:54:43.008093 kubelet[2439]: E0709 14:54:43.006486 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-3d8d1010bc.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="200ms" Jul 9 14:54:43.012784 kubelet[2439]: W0709 
14:54:43.012682 2439 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Jul 9 14:54:43.012992 kubelet[2439]: E0709 14:54:43.012820 2439 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:54:43.013379 kubelet[2439]: I0709 14:54:43.013338 2439 factory.go:221] Registration of the systemd container factory successfully Jul 9 14:54:43.013613 kubelet[2439]: I0709 14:54:43.013563 2439 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 14:54:43.016456 kubelet[2439]: E0709 14:54:43.014884 2439 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 14:54:43.017192 kubelet[2439]: I0709 14:54:43.017149 2439 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 14:54:43.018285 kubelet[2439]: I0709 14:54:43.018268 2439 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 14:54:43.018863 kubelet[2439]: I0709 14:54:43.018849 2439 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 14:54:43.019013 kubelet[2439]: I0709 14:54:43.018998 2439 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 14:54:43.019097 kubelet[2439]: I0709 14:54:43.019086 2439 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 14:54:43.019249 kubelet[2439]: E0709 14:54:43.019222 2439 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 14:54:43.020525 kubelet[2439]: I0709 14:54:43.020480 2439 factory.go:221] Registration of the containerd container factory successfully Jul 9 14:54:43.026335 kubelet[2439]: W0709 14:54:43.026294 2439 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Jul 9 14:54:43.026524 kubelet[2439]: E0709 14:54:43.026500 2439 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:54:43.048350 kubelet[2439]: I0709 14:54:43.048314 2439 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 14:54:43.048350 kubelet[2439]: I0709 14:54:43.048335 2439 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 14:54:43.048509 kubelet[2439]: I0709 14:54:43.048362 2439 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:54:43.053574 kubelet[2439]: I0709 14:54:43.053537 2439 policy_none.go:49] "None policy: Start" Jul 9 14:54:43.053650 kubelet[2439]: I0709 14:54:43.053582 2439 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 14:54:43.053650 kubelet[2439]: I0709 14:54:43.053612 2439 state_mem.go:35] "Initializing new in-memory state store" Jul 9 14:54:43.063544 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 9 14:54:43.081219 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 14:54:43.086746 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 14:54:43.095156 kubelet[2439]: I0709 14:54:43.095092 2439 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 14:54:43.095978 kubelet[2439]: E0709 14:54:43.095312 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" Jul 9 14:54:43.096564 kubelet[2439]: I0709 14:54:43.096532 2439 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 14:54:43.097774 kubelet[2439]: I0709 14:54:43.096577 2439 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 14:54:43.097774 kubelet[2439]: I0709 14:54:43.097072 2439 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 14:54:43.100358 kubelet[2439]: E0709 14:54:43.100343 2439 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 14:54:43.100678 kubelet[2439]: E0709 14:54:43.100661 2439 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" Jul 9 14:54:43.144442 systemd[1]: Created slice kubepods-burstable-pod19e668e1706749a4125dc683e501ff82.slice - libcontainer container kubepods-burstable-pod19e668e1706749a4125dc683e501ff82.slice. 
Jul 9 14:54:43.168777 kubelet[2439]: E0709 14:54:43.168660 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.180768 systemd[1]: Created slice kubepods-burstable-pod6256d6b0ad5b6fd7dde2de9cddd9c6d5.slice - libcontainer container kubepods-burstable-pod6256d6b0ad5b6fd7dde2de9cddd9c6d5.slice. Jul 9 14:54:43.187512 kubelet[2439]: E0709 14:54:43.187449 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.192455 systemd[1]: Created slice kubepods-burstable-pod83ad7c40c43be48b49bf4afcab874d5b.slice - libcontainer container kubepods-burstable-pod83ad7c40c43be48b49bf4afcab874d5b.slice. Jul 9 14:54:43.198611 kubelet[2439]: E0709 14:54:43.198162 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.200728 kubelet[2439]: I0709 14:54:43.200667 2439 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.201719 kubelet[2439]: E0709 14:54:43.201656 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.207706 kubelet[2439]: E0709 14:54:43.207637 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-3d8d1010bc.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="400ms" Jul 9 
14:54:43.305049 kubelet[2439]: I0709 14:54:43.304673 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305324 kubelet[2439]: I0709 14:54:43.304928 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-ca-certs\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305506 kubelet[2439]: I0709 14:54:43.305392 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305625 kubelet[2439]: I0709 14:54:43.305584 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19e668e1706749a4125dc683e501ff82-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"19e668e1706749a4125dc683e501ff82\") " pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305777 kubelet[2439]: I0709 14:54:43.305720 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305874 kubelet[2439]: I0709 14:54:43.305832 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305998 kubelet[2439]: I0709 14:54:43.305885 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.305998 kubelet[2439]: I0709 14:54:43.305990 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.306211 kubelet[2439]: I0709 14:54:43.306039 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " 
pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.405711 kubelet[2439]: I0709 14:54:43.405497 2439 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.408893 kubelet[2439]: E0709 14:54:43.408831 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.472977 containerd[1557]: time="2025-07-09T14:54:43.472662687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal,Uid:19e668e1706749a4125dc683e501ff82,Namespace:kube-system,Attempt:0,}" Jul 9 14:54:43.489932 containerd[1557]: time="2025-07-09T14:54:43.489784539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal,Uid:6256d6b0ad5b6fd7dde2de9cddd9c6d5,Namespace:kube-system,Attempt:0,}" Jul 9 14:54:43.503063 containerd[1557]: time="2025-07-09T14:54:43.502557211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal,Uid:83ad7c40c43be48b49bf4afcab874d5b,Namespace:kube-system,Attempt:0,}" Jul 9 14:54:43.610368 kubelet[2439]: E0709 14:54:43.610288 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-3d8d1010bc.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="800ms" Jul 9 14:54:43.619741 containerd[1557]: time="2025-07-09T14:54:43.619672494Z" level=info msg="connecting to shim 381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f" address="unix:///run/containerd/s/7ea270e13a80a5c53576412116ca8fc6819e59f2f51bea4f26261444e457a472" namespace=k8s.io protocol=ttrpc version=3 Jul 9 
14:54:43.625786 containerd[1557]: time="2025-07-09T14:54:43.625706700Z" level=info msg="connecting to shim 6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710" address="unix:///run/containerd/s/14c054c382abb383451cea89b29ffb944e1224e5452f2e9d3b746fdab07823ea" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:54:43.634752 containerd[1557]: time="2025-07-09T14:54:43.634677019Z" level=info msg="connecting to shim 6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26" address="unix:///run/containerd/s/971b3f1a29cba9adb94eb72d594bd23a244d369026fce099d4716c79f3abd835" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:54:43.652980 kubelet[2439]: E0709 14:54:43.652840 2439 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.253:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-100-3d8d1010bc.novalocal.18509d0111e73dcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-100-3d8d1010bc.novalocal,UID:ci-9999-9-100-3d8d1010bc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-3d8d1010bc.novalocal,},FirstTimestamp:2025-07-09 14:54:42.968018383 +0000 UTC m=+0.561285054,LastTimestamp:2025-07-09 14:54:42.968018383 +0000 UTC m=+0.561285054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-3d8d1010bc.novalocal,}" Jul 9 14:54:43.700155 systemd[1]: Started cri-containerd-381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f.scope - libcontainer container 381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f. 
Jul 9 14:54:43.710780 systemd[1]: Started cri-containerd-6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710.scope - libcontainer container 6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710. Jul 9 14:54:43.728125 systemd[1]: Started cri-containerd-6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26.scope - libcontainer container 6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26. Jul 9 14:54:43.811104 containerd[1557]: time="2025-07-09T14:54:43.810915631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal,Uid:6256d6b0ad5b6fd7dde2de9cddd9c6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710\"" Jul 9 14:54:43.811532 kubelet[2439]: I0709 14:54:43.811422 2439 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.812752 kubelet[2439]: E0709 14:54:43.812645 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:43.819930 containerd[1557]: time="2025-07-09T14:54:43.819768631Z" level=info msg="CreateContainer within sandbox \"6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 14:54:43.839889 containerd[1557]: time="2025-07-09T14:54:43.839811549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal,Uid:19e668e1706749a4125dc683e501ff82,Namespace:kube-system,Attempt:0,} returns sandbox id \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\"" Jul 9 14:54:43.844929 containerd[1557]: time="2025-07-09T14:54:43.844707206Z" level=info msg="CreateContainer within sandbox 
\"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 14:54:43.847272 containerd[1557]: time="2025-07-09T14:54:43.846632201Z" level=info msg="Container 6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:54:43.849492 containerd[1557]: time="2025-07-09T14:54:43.849458801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal,Uid:83ad7c40c43be48b49bf4afcab874d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\"" Jul 9 14:54:43.853241 containerd[1557]: time="2025-07-09T14:54:43.853198807Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 14:54:43.860960 containerd[1557]: time="2025-07-09T14:54:43.860844198Z" level=info msg="CreateContainer within sandbox \"6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c\"" Jul 9 14:54:43.862266 containerd[1557]: time="2025-07-09T14:54:43.862224130Z" level=info msg="StartContainer for \"6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c\"" Jul 9 14:54:43.865102 containerd[1557]: time="2025-07-09T14:54:43.865057753Z" level=info msg="connecting to shim 6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c" address="unix:///run/containerd/s/14c054c382abb383451cea89b29ffb944e1224e5452f2e9d3b746fdab07823ea" protocol=ttrpc version=3 Jul 9 14:54:43.866119 containerd[1557]: time="2025-07-09T14:54:43.866070626Z" level=info msg="Container 543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0: CDI devices from CRI 
Config.CDIDevices: []" Jul 9 14:54:43.879239 containerd[1557]: time="2025-07-09T14:54:43.878595151Z" level=info msg="Container 0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:54:43.880049 containerd[1557]: time="2025-07-09T14:54:43.879909170Z" level=info msg="CreateContainer within sandbox \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\"" Jul 9 14:54:43.880735 containerd[1557]: time="2025-07-09T14:54:43.880658978Z" level=info msg="StartContainer for \"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\"" Jul 9 14:54:43.881828 containerd[1557]: time="2025-07-09T14:54:43.881792738Z" level=info msg="connecting to shim 543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0" address="unix:///run/containerd/s/7ea270e13a80a5c53576412116ca8fc6819e59f2f51bea4f26261444e457a472" protocol=ttrpc version=3 Jul 9 14:54:43.895311 containerd[1557]: time="2025-07-09T14:54:43.895239787Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\"" Jul 9 14:54:43.897399 containerd[1557]: time="2025-07-09T14:54:43.897188198Z" level=info msg="StartContainer for \"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\"" Jul 9 14:54:43.900322 containerd[1557]: time="2025-07-09T14:54:43.900297047Z" level=info msg="connecting to shim 0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439" address="unix:///run/containerd/s/971b3f1a29cba9adb94eb72d594bd23a244d369026fce099d4716c79f3abd835" protocol=ttrpc version=3 Jul 9 14:54:43.902234 systemd[1]: Started 
cri-containerd-6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c.scope - libcontainer container 6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c. Jul 9 14:54:43.915106 systemd[1]: Started cri-containerd-543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0.scope - libcontainer container 543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0. Jul 9 14:54:43.940097 systemd[1]: Started cri-containerd-0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439.scope - libcontainer container 0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439. Jul 9 14:54:43.995606 containerd[1557]: time="2025-07-09T14:54:43.995475222Z" level=info msg="StartContainer for \"6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c\" returns successfully" Jul 9 14:54:44.072673 containerd[1557]: time="2025-07-09T14:54:44.072315438Z" level=info msg="StartContainer for \"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" returns successfully" Jul 9 14:54:44.078041 kubelet[2439]: E0709 14:54:44.077603 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:44.088652 containerd[1557]: time="2025-07-09T14:54:44.088608350Z" level=info msg="StartContainer for \"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" returns successfully" Jul 9 14:54:44.615507 kubelet[2439]: I0709 14:54:44.615469 2439 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.088259 kubelet[2439]: E0709 14:54:45.088216 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.092128 kubelet[2439]: E0709 14:54:45.091846 2439 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.092515 kubelet[2439]: E0709 14:54:45.092476 2439 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.637573 kubelet[2439]: E0709 14:54:45.637524 2439 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.711719 kubelet[2439]: I0709 14:54:45.711492 2439 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.795858 kubelet[2439]: I0709 14:54:45.795820 2439 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.817108 kubelet[2439]: E0709 14:54:45.816351 2439 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.817108 kubelet[2439]: I0709 14:54:45.817050 2439 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.819992 kubelet[2439]: E0709 14:54:45.819852 2439 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.820293 kubelet[2439]: I0709 14:54:45.820126 2439 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.822737 kubelet[2439]: E0709 14:54:45.822709 2439 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:45.960762 kubelet[2439]: I0709 14:54:45.960469 2439 apiserver.go:52] "Watching apiserver" Jul 9 14:54:46.004034 kubelet[2439]: I0709 14:54:46.003906 2439 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 14:54:46.093981 kubelet[2439]: I0709 14:54:46.093562 2439 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:46.096394 kubelet[2439]: I0709 14:54:46.096353 2439 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:46.098495 kubelet[2439]: E0709 14:54:46.098431 2439 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:46.101990 kubelet[2439]: E0709 14:54:46.101892 2439 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:47.098425 kubelet[2439]: I0709 14:54:47.098344 2439 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:47.112000 kubelet[2439]: W0709 14:54:47.111681 2439 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:54:48.421476 systemd[1]: Reload requested from client PID 2708 ('systemctl') (unit session-11.scope)... Jul 9 14:54:48.421560 systemd[1]: Reloading... Jul 9 14:54:48.552010 zram_generator::config[2753]: No configuration found. Jul 9 14:54:48.683110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:54:48.849289 systemd[1]: Reloading finished in 426 ms. Jul 9 14:54:48.884235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:54:48.906826 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 14:54:48.907221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:48.907350 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 131.3M memory peak. Jul 9 14:54:48.910824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:54:49.270314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:54:49.287554 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 14:54:49.412690 kubelet[2817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 14:54:49.412690 kubelet[2817]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 9 14:54:49.412690 kubelet[2817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 14:54:49.412690 kubelet[2817]: I0709 14:54:49.411153 2817 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 14:54:49.428428 kubelet[2817]: I0709 14:54:49.428388 2817 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 14:54:49.428606 kubelet[2817]: I0709 14:54:49.428594 2817 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 14:54:49.429680 kubelet[2817]: I0709 14:54:49.429663 2817 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 14:54:49.434209 kubelet[2817]: I0709 14:54:49.434153 2817 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 14:54:49.446282 kubelet[2817]: I0709 14:54:49.445893 2817 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 14:54:49.460510 kubelet[2817]: I0709 14:54:49.460476 2817 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 14:54:49.465633 kubelet[2817]: I0709 14:54:49.465404 2817 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 14:54:49.466027 kubelet[2817]: I0709 14:54:49.465987 2817 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 14:54:49.466386 kubelet[2817]: I0709 14:54:49.466111 2817 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-3d8d1010bc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 14:54:49.466709 kubelet[2817]: I0709 14:54:49.466694 2817 topology_manager.go:138] "Creating topology 
manager with none policy" Jul 9 14:54:49.467298 kubelet[2817]: I0709 14:54:49.466779 2817 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 14:54:49.467546 kubelet[2817]: I0709 14:54:49.467473 2817 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:54:49.468448 kubelet[2817]: I0709 14:54:49.468431 2817 kubelet.go:446] "Attempting to sync node with API server" Jul 9 14:54:49.468624 kubelet[2817]: I0709 14:54:49.468529 2817 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 14:54:49.468732 kubelet[2817]: I0709 14:54:49.468720 2817 kubelet.go:352] "Adding apiserver pod source" Jul 9 14:54:49.468897 kubelet[2817]: I0709 14:54:49.468881 2817 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 14:54:49.482644 kubelet[2817]: I0709 14:54:49.482602 2817 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 14:54:49.485596 kubelet[2817]: I0709 14:54:49.485158 2817 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 14:54:49.489868 kubelet[2817]: I0709 14:54:49.489516 2817 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 14:54:49.492388 kubelet[2817]: I0709 14:54:49.491009 2817 server.go:1287] "Started kubelet" Jul 9 14:54:49.495770 kubelet[2817]: I0709 14:54:49.494513 2817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 14:54:49.501603 kubelet[2817]: I0709 14:54:49.501480 2817 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 14:54:49.508898 kubelet[2817]: I0709 14:54:49.494627 2817 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 14:54:49.523375 kubelet[2817]: I0709 14:54:49.523230 2817 server.go:479] "Adding debug handlers to kubelet server" Jul 9 14:54:49.534699 kubelet[2817]: I0709 14:54:49.534669 
2817 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 14:54:49.535123 kubelet[2817]: E0709 14:54:49.535102 2817 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-3d8d1010bc.novalocal\" not found" Jul 9 14:54:49.535650 kubelet[2817]: I0709 14:54:49.494698 2817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 14:54:49.536268 kubelet[2817]: I0709 14:54:49.536251 2817 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 14:54:49.538969 kubelet[2817]: I0709 14:54:49.538267 2817 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 14:54:49.538969 kubelet[2817]: I0709 14:54:49.538421 2817 reconciler.go:26] "Reconciler: start to sync state" Jul 9 14:54:49.544523 kubelet[2817]: I0709 14:54:49.544485 2817 factory.go:221] Registration of the systemd container factory successfully Jul 9 14:54:49.544661 kubelet[2817]: I0709 14:54:49.544624 2817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 14:54:49.549614 kubelet[2817]: I0709 14:54:49.549562 2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 14:54:49.551725 kubelet[2817]: I0709 14:54:49.551702 2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 14:54:49.551903 kubelet[2817]: I0709 14:54:49.551883 2817 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 14:54:49.552094 kubelet[2817]: I0709 14:54:49.552075 2817 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 14:54:49.552209 kubelet[2817]: I0709 14:54:49.552193 2817 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 14:54:49.552410 kubelet[2817]: E0709 14:54:49.552370 2817 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 14:54:49.564040 kubelet[2817]: I0709 14:54:49.562893 2817 factory.go:221] Registration of the containerd container factory successfully Jul 9 14:54:49.570018 kubelet[2817]: E0709 14:54:49.569418 2817 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 14:54:49.644235 kubelet[2817]: I0709 14:54:49.644211 2817 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 14:54:49.644431 kubelet[2817]: I0709 14:54:49.644416 2817 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 14:54:49.644545 kubelet[2817]: I0709 14:54:49.644534 2817 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:54:49.644823 kubelet[2817]: I0709 14:54:49.644804 2817 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 14:54:49.644931 kubelet[2817]: I0709 14:54:49.644903 2817 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 14:54:49.645056 kubelet[2817]: I0709 14:54:49.645044 2817 policy_none.go:49] "None policy: Start" Jul 9 14:54:49.645179 kubelet[2817]: I0709 14:54:49.645166 2817 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 14:54:49.645275 kubelet[2817]: I0709 14:54:49.645264 2817 state_mem.go:35] "Initializing new in-memory state store" Jul 9 14:54:49.645494 kubelet[2817]: I0709 14:54:49.645476 2817 state_mem.go:75] "Updated machine memory state" Jul 9 14:54:49.652356 kubelet[2817]: I0709 14:54:49.652318 2817 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 14:54:49.653713 kubelet[2817]: I0709 
14:54:49.652526 2817 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 14:54:49.653713 kubelet[2817]: I0709 14:54:49.652562 2817 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 14:54:49.655210 kubelet[2817]: I0709 14:54:49.654873 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.657075 kubelet[2817]: I0709 14:54:49.655843 2817 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 14:54:49.657075 kubelet[2817]: I0709 14:54:49.656265 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.657450 kubelet[2817]: I0709 14:54:49.657429 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.664020 kubelet[2817]: E0709 14:54:49.663356 2817 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 14:54:49.676581 kubelet[2817]: W0709 14:54:49.676541 2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:54:49.679835 kubelet[2817]: W0709 14:54:49.679498 2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:54:49.680954 kubelet[2817]: W0709 14:54:49.679980 2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:54:49.680954 kubelet[2817]: E0709 14:54:49.680175 2817 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.741198 kubelet[2817]: I0709 14:54:49.741148 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.741794 kubelet[2817]: I0709 14:54:49.741469 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.741794 kubelet[2817]: I0709 14:54:49.741549 2817 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.741794 kubelet[2817]: I0709 14:54:49.741601 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19e668e1706749a4125dc683e501ff82-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"19e668e1706749a4125dc683e501ff82\") " pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.741794 kubelet[2817]: I0709 14:54:49.741641 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.743501 kubelet[2817]: I0709 14:54:49.741671 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.743501 kubelet[2817]: I0709 14:54:49.741700 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-ca-certs\") pod 
\"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.743501 kubelet[2817]: I0709 14:54:49.741722 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83ad7c40c43be48b49bf4afcab874d5b-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"83ad7c40c43be48b49bf4afcab874d5b\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.743892 kubelet[2817]: I0709 14:54:49.743730 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6256d6b0ad5b6fd7dde2de9cddd9c6d5-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" (UID: \"6256d6b0ad5b6fd7dde2de9cddd9c6d5\") " pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.775062 kubelet[2817]: I0709 14:54:49.774620 2817 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.801655 kubelet[2817]: I0709 14:54:49.801613 2817 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:49.801809 kubelet[2817]: I0709 14:54:49.801713 2817 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:50.471991 kubelet[2817]: I0709 14:54:50.471898 2817 apiserver.go:52] "Watching apiserver" Jul 9 14:54:50.538706 kubelet[2817]: I0709 14:54:50.538594 2817 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 14:54:50.606174 kubelet[2817]: I0709 14:54:50.605480 2817 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:50.632288 kubelet[2817]: W0709 14:54:50.632232 2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:54:50.632517 kubelet[2817]: E0709 14:54:50.632315 2817 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:54:50.656188 kubelet[2817]: I0709 14:54:50.655138 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" podStartSLOduration=3.655094238 podStartE2EDuration="3.655094238s" podCreationTimestamp="2025-07-09 14:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:54:50.652766268 +0000 UTC m=+1.346710582" watchObservedRunningTime="2025-07-09 14:54:50.655094238 +0000 UTC m=+1.349038502" Jul 9 14:54:50.696963 kubelet[2817]: I0709 14:54:50.696183 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" podStartSLOduration=1.696161136 podStartE2EDuration="1.696161136s" podCreationTimestamp="2025-07-09 14:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:54:50.672061753 +0000 UTC m=+1.366005987" watchObservedRunningTime="2025-07-09 14:54:50.696161136 +0000 UTC m=+1.390105370" Jul 9 14:54:50.721616 kubelet[2817]: I0709 14:54:50.721476 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999-9-100-3d8d1010bc.novalocal" podStartSLOduration=1.7214448629999999 
podStartE2EDuration="1.721444863s" podCreationTimestamp="2025-07-09 14:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:54:50.697055474 +0000 UTC m=+1.390999729" watchObservedRunningTime="2025-07-09 14:54:50.721444863 +0000 UTC m=+1.415389147" Jul 9 14:54:53.782467 kubelet[2817]: I0709 14:54:53.782303 2817 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 14:54:53.785812 containerd[1557]: time="2025-07-09T14:54:53.785598722Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 14:54:53.788192 kubelet[2817]: I0709 14:54:53.787664 2817 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 14:54:54.534281 systemd[1]: Created slice kubepods-besteffort-podc6f8291c_00b2_4ea6_b297_813dfab5838f.slice - libcontainer container kubepods-besteffort-podc6f8291c_00b2_4ea6_b297_813dfab5838f.slice. 
Jul 9 14:54:54.576902 kubelet[2817]: I0709 14:54:54.576862 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2q4\" (UniqueName: \"kubernetes.io/projected/c6f8291c-00b2-4ea6-b297-813dfab5838f-kube-api-access-xk2q4\") pod \"kube-proxy-pvl8c\" (UID: \"c6f8291c-00b2-4ea6-b297-813dfab5838f\") " pod="kube-system/kube-proxy-pvl8c" Jul 9 14:54:54.577322 kubelet[2817]: I0709 14:54:54.577180 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6f8291c-00b2-4ea6-b297-813dfab5838f-xtables-lock\") pod \"kube-proxy-pvl8c\" (UID: \"c6f8291c-00b2-4ea6-b297-813dfab5838f\") " pod="kube-system/kube-proxy-pvl8c" Jul 9 14:54:54.577322 kubelet[2817]: I0709 14:54:54.577222 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6f8291c-00b2-4ea6-b297-813dfab5838f-kube-proxy\") pod \"kube-proxy-pvl8c\" (UID: \"c6f8291c-00b2-4ea6-b297-813dfab5838f\") " pod="kube-system/kube-proxy-pvl8c" Jul 9 14:54:54.577322 kubelet[2817]: I0709 14:54:54.577263 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6f8291c-00b2-4ea6-b297-813dfab5838f-lib-modules\") pod \"kube-proxy-pvl8c\" (UID: \"c6f8291c-00b2-4ea6-b297-813dfab5838f\") " pod="kube-system/kube-proxy-pvl8c" Jul 9 14:54:54.841972 systemd[1]: Created slice kubepods-besteffort-podfb3593d4_ee9e_46ce_a93c_b7c5fa10975a.slice - libcontainer container kubepods-besteffort-podfb3593d4_ee9e_46ce_a93c_b7c5fa10975a.slice. 
Jul 9 14:54:54.846096 containerd[1557]: time="2025-07-09T14:54:54.846035287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvl8c,Uid:c6f8291c-00b2-4ea6-b297-813dfab5838f,Namespace:kube-system,Attempt:0,}" Jul 9 14:54:54.880273 kubelet[2817]: I0709 14:54:54.880228 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb3593d4-ee9e-46ce-a93c-b7c5fa10975a-var-lib-calico\") pod \"tigera-operator-747864d56d-rtv2w\" (UID: \"fb3593d4-ee9e-46ce-a93c-b7c5fa10975a\") " pod="tigera-operator/tigera-operator-747864d56d-rtv2w" Jul 9 14:54:54.880895 kubelet[2817]: I0709 14:54:54.880608 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhz5p\" (UniqueName: \"kubernetes.io/projected/fb3593d4-ee9e-46ce-a93c-b7c5fa10975a-kube-api-access-qhz5p\") pod \"tigera-operator-747864d56d-rtv2w\" (UID: \"fb3593d4-ee9e-46ce-a93c-b7c5fa10975a\") " pod="tigera-operator/tigera-operator-747864d56d-rtv2w" Jul 9 14:54:54.883614 containerd[1557]: time="2025-07-09T14:54:54.883168645Z" level=info msg="connecting to shim 4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094" address="unix:///run/containerd/s/c5e9ce37bb4b60b941dd8b632f23740caa0a5442f2de0bb963f00a366a5423a9" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:54:54.919141 systemd[1]: Started cri-containerd-4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094.scope - libcontainer container 4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094. 
Jul 9 14:54:54.961723 containerd[1557]: time="2025-07-09T14:54:54.961649509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvl8c,Uid:c6f8291c-00b2-4ea6-b297-813dfab5838f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094\"" Jul 9 14:54:54.968845 containerd[1557]: time="2025-07-09T14:54:54.967985130Z" level=info msg="CreateContainer within sandbox \"4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 14:54:54.995214 containerd[1557]: time="2025-07-09T14:54:54.995168672Z" level=info msg="Container c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:54:55.019502 containerd[1557]: time="2025-07-09T14:54:55.019443011Z" level=info msg="CreateContainer within sandbox \"4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089\"" Jul 9 14:54:55.021962 containerd[1557]: time="2025-07-09T14:54:55.021585633Z" level=info msg="StartContainer for \"c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089\"" Jul 9 14:54:55.026712 containerd[1557]: time="2025-07-09T14:54:55.026654256Z" level=info msg="connecting to shim c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089" address="unix:///run/containerd/s/c5e9ce37bb4b60b941dd8b632f23740caa0a5442f2de0bb963f00a366a5423a9" protocol=ttrpc version=3 Jul 9 14:54:55.047093 systemd[1]: Started cri-containerd-c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089.scope - libcontainer container c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089. 
Jul 9 14:54:55.101872 containerd[1557]: time="2025-07-09T14:54:55.101420431Z" level=info msg="StartContainer for \"c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089\" returns successfully" Jul 9 14:54:55.150334 containerd[1557]: time="2025-07-09T14:54:55.150295792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rtv2w,Uid:fb3593d4-ee9e-46ce-a93c-b7c5fa10975a,Namespace:tigera-operator,Attempt:0,}" Jul 9 14:54:55.178520 containerd[1557]: time="2025-07-09T14:54:55.178436167Z" level=info msg="connecting to shim ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad" address="unix:///run/containerd/s/35b466d41ab2056d14d96206e9c974340656d4f96ce6a3a4523a8686a0167281" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:54:55.216138 systemd[1]: Started cri-containerd-ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad.scope - libcontainer container ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad. Jul 9 14:54:55.282583 containerd[1557]: time="2025-07-09T14:54:55.282454284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rtv2w,Uid:fb3593d4-ee9e-46ce-a93c-b7c5fa10975a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\"" Jul 9 14:54:55.285180 containerd[1557]: time="2025-07-09T14:54:55.285130988Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 14:54:55.664514 kubelet[2817]: I0709 14:54:55.663912 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvl8c" podStartSLOduration=1.6638575 podStartE2EDuration="1.6638575s" podCreationTimestamp="2025-07-09 14:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:54:55.663322697 +0000 UTC m=+6.357267031" watchObservedRunningTime="2025-07-09 14:54:55.6638575 +0000 
UTC m=+6.357801784" Jul 9 14:54:55.723859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918152229.mount: Deactivated successfully. Jul 9 14:54:56.784146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987067712.mount: Deactivated successfully. Jul 9 14:54:58.144109 containerd[1557]: time="2025-07-09T14:54:58.143863645Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:58.145596 containerd[1557]: time="2025-07-09T14:54:58.145512119Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 9 14:54:58.146959 containerd[1557]: time="2025-07-09T14:54:58.146881799Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:58.154058 containerd[1557]: time="2025-07-09T14:54:58.153906912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:54:58.155643 containerd[1557]: time="2025-07-09T14:54:58.154898974Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.869582026s" Jul 9 14:54:58.155643 containerd[1557]: time="2025-07-09T14:54:58.154963966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 9 14:54:58.160996 containerd[1557]: time="2025-07-09T14:54:58.160912649Z" level=info msg="CreateContainer within sandbox 
\"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 14:54:58.175841 containerd[1557]: time="2025-07-09T14:54:58.175800338Z" level=info msg="Container dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:54:58.191727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669702015.mount: Deactivated successfully. Jul 9 14:54:58.196338 containerd[1557]: time="2025-07-09T14:54:58.196224837Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\"" Jul 9 14:54:58.198121 containerd[1557]: time="2025-07-09T14:54:58.197833565Z" level=info msg="StartContainer for \"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\"" Jul 9 14:54:58.201067 containerd[1557]: time="2025-07-09T14:54:58.200964241Z" level=info msg="connecting to shim dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49" address="unix:///run/containerd/s/35b466d41ab2056d14d96206e9c974340656d4f96ce6a3a4523a8686a0167281" protocol=ttrpc version=3 Jul 9 14:54:58.242119 systemd[1]: Started cri-containerd-dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49.scope - libcontainer container dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49. 
Jul 9 14:54:58.283781 containerd[1557]: time="2025-07-09T14:54:58.283701777Z" level=info msg="StartContainer for \"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" returns successfully" Jul 9 14:55:02.950165 kubelet[2817]: I0709 14:55:02.949892 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-rtv2w" podStartSLOduration=6.076122987 podStartE2EDuration="8.949812713s" podCreationTimestamp="2025-07-09 14:54:54 +0000 UTC" firstStartedPulling="2025-07-09 14:54:55.283745618 +0000 UTC m=+5.977689862" lastFinishedPulling="2025-07-09 14:54:58.157435354 +0000 UTC m=+8.851379588" observedRunningTime="2025-07-09 14:54:58.702158748 +0000 UTC m=+9.396103102" watchObservedRunningTime="2025-07-09 14:55:02.949812713 +0000 UTC m=+13.643756957" Jul 9 14:55:05.247831 sudo[1847]: pam_unix(sudo:session): session closed for user root Jul 9 14:55:05.414556 sshd[1846]: Connection closed by 172.24.4.1 port 44418 Jul 9 14:55:05.416827 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jul 9 14:55:05.428393 systemd[1]: sshd@8-172.24.4.253:22-172.24.4.1:44418.service: Deactivated successfully. Jul 9 14:55:05.433798 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 14:55:05.435344 systemd[1]: session-11.scope: Consumed 8.667s CPU time, 233M memory peak. Jul 9 14:55:05.439794 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Jul 9 14:55:05.446502 systemd-logind[1534]: Removed session 11. 
Jul 9 14:55:09.678978 kubelet[2817]: I0709 14:55:09.678642 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c14a5598-3b8f-4e41-a338-ca4fc6b03463-typha-certs\") pod \"calico-typha-d7c687fd5-qs58n\" (UID: \"c14a5598-3b8f-4e41-a338-ca4fc6b03463\") " pod="calico-system/calico-typha-d7c687fd5-qs58n" Jul 9 14:55:09.678978 kubelet[2817]: I0709 14:55:09.678735 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcnk5\" (UniqueName: \"kubernetes.io/projected/c14a5598-3b8f-4e41-a338-ca4fc6b03463-kube-api-access-zcnk5\") pod \"calico-typha-d7c687fd5-qs58n\" (UID: \"c14a5598-3b8f-4e41-a338-ca4fc6b03463\") " pod="calico-system/calico-typha-d7c687fd5-qs58n" Jul 9 14:55:09.678978 kubelet[2817]: I0709 14:55:09.678763 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14a5598-3b8f-4e41-a338-ca4fc6b03463-tigera-ca-bundle\") pod \"calico-typha-d7c687fd5-qs58n\" (UID: \"c14a5598-3b8f-4e41-a338-ca4fc6b03463\") " pod="calico-system/calico-typha-d7c687fd5-qs58n" Jul 9 14:55:09.683453 systemd[1]: Created slice kubepods-besteffort-podc14a5598_3b8f_4e41_a338_ca4fc6b03463.slice - libcontainer container kubepods-besteffort-podc14a5598_3b8f_4e41_a338_ca4fc6b03463.slice. 
Jul 9 14:55:09.991924 containerd[1557]: time="2025-07-09T14:55:09.991376406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7c687fd5-qs58n,Uid:c14a5598-3b8f-4e41-a338-ca4fc6b03463,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:10.047951 containerd[1557]: time="2025-07-09T14:55:10.047215501Z" level=info msg="connecting to shim fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab" address="unix:///run/containerd/s/5eb0d533c3f3032073885f82cc0cfbc8a065830ddd5964ab16f5098f18402c00" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:10.095252 systemd[1]: Started cri-containerd-fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab.scope - libcontainer container fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab. Jul 9 14:55:10.161399 systemd[1]: Created slice kubepods-besteffort-pod7712052e_f0a2_41fc_b8f4_ca44ae9074da.slice - libcontainer container kubepods-besteffort-pod7712052e_f0a2_41fc_b8f4_ca44ae9074da.slice. Jul 9 14:55:10.181410 kubelet[2817]: I0709 14:55:10.180768 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-lib-modules\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.181724 kubelet[2817]: I0709 14:55:10.181632 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7712052e-f0a2-41fc-b8f4-ca44ae9074da-node-certs\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.181847 kubelet[2817]: I0709 14:55:10.181663 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-policysync\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182085 kubelet[2817]: I0709 14:55:10.182002 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-xtables-lock\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182219 kubelet[2817]: I0709 14:55:10.182170 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-var-lib-calico\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182660 kubelet[2817]: I0709 14:55:10.182199 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-var-run-calico\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182660 kubelet[2817]: I0709 14:55:10.182349 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7mz\" (UniqueName: \"kubernetes.io/projected/7712052e-f0a2-41fc-b8f4-ca44ae9074da-kube-api-access-8n7mz\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182660 kubelet[2817]: I0709 14:55:10.182375 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-cni-net-dir\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182660 kubelet[2817]: I0709 14:55:10.182393 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-cni-bin-dir\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182660 kubelet[2817]: I0709 14:55:10.182424 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-flexvol-driver-host\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182854 kubelet[2817]: I0709 14:55:10.182443 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7712052e-f0a2-41fc-b8f4-ca44ae9074da-tigera-ca-bundle\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.182854 kubelet[2817]: I0709 14:55:10.182466 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7712052e-f0a2-41fc-b8f4-ca44ae9074da-cni-log-dir\") pod \"calico-node-8zt95\" (UID: \"7712052e-f0a2-41fc-b8f4-ca44ae9074da\") " pod="calico-system/calico-node-8zt95" Jul 9 14:55:10.272200 containerd[1557]: time="2025-07-09T14:55:10.271864656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7c687fd5-qs58n,Uid:c14a5598-3b8f-4e41-a338-ca4fc6b03463,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab\"" Jul 9 14:55:10.276426 containerd[1557]: time="2025-07-09T14:55:10.276255923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 14:55:10.292721 kubelet[2817]: E0709 14:55:10.292552 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.292721 kubelet[2817]: W0709 14:55:10.292598 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.292721 kubelet[2817]: E0709 14:55:10.292674 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:10.300713 kubelet[2817]: E0709 14:55:10.300632 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.300713 kubelet[2817]: W0709 14:55:10.300655 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.300713 kubelet[2817]: E0709 14:55:10.300675 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:10.368912 kubelet[2817]: E0709 14:55:10.367828 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93" Jul 9 14:55:10.380192 kubelet[2817]: E0709 14:55:10.380156 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.380411 kubelet[2817]: W0709 14:55:10.380387 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.380535 kubelet[2817]: E0709 14:55:10.380517 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:10.380969 kubelet[2817]: E0709 14:55:10.380906 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.382018 kubelet[2817]: W0709 14:55:10.381017 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.382018 kubelet[2817]: E0709 14:55:10.381033 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:10.382247 kubelet[2817]: E0709 14:55:10.382232 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.382344 kubelet[2817]: W0709 14:55:10.382330 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.382604 kubelet[2817]: E0709 14:55:10.382401 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:10.382760 kubelet[2817]: E0709 14:55:10.382747 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.382855 kubelet[2817]: W0709 14:55:10.382841 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.382968 kubelet[2817]: E0709 14:55:10.382925 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:10.383380 kubelet[2817]: E0709 14:55:10.383270 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.383380 kubelet[2817]: W0709 14:55:10.383285 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.383380 kubelet[2817]: E0709 14:55:10.383295 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:10.384237 kubelet[2817]: E0709 14:55:10.384222 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.384340 kubelet[2817]: W0709 14:55:10.384322 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.384594 kubelet[2817]: E0709 14:55:10.384471 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 9 14:55:10.384759 kubelet[2817]: E0709 14:55:10.384743 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.385003 kubelet[2817]: W0709 14:55:10.384857 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.385003 kubelet[2817]: E0709 14:55:10.384888 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.387153 kubelet[2817]: E0709 14:55:10.387094 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.387153 kubelet[2817]: W0709 14:55:10.387112 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.387153 kubelet[2817]: E0709 14:55:10.387128 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.387851 kubelet[2817]: E0709 14:55:10.387747 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.387851 kubelet[2817]: W0709 14:55:10.387761 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.387851 kubelet[2817]: E0709 14:55:10.387773 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.388706 kubelet[2817]: E0709 14:55:10.388691 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.389295 kubelet[2817]: W0709 14:55:10.389181 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.389295 kubelet[2817]: E0709 14:55:10.389202 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.391080 kubelet[2817]: E0709 14:55:10.391056 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.391313 kubelet[2817]: W0709 14:55:10.391193 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.391313 kubelet[2817]: E0709 14:55:10.391214 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.391970 kubelet[2817]: E0709 14:55:10.391683 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.391970 kubelet[2817]: W0709 14:55:10.391717 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.391970 kubelet[2817]: E0709 14:55:10.391729 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.392287 kubelet[2817]: E0709 14:55:10.392248 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.392364 kubelet[2817]: W0709 14:55:10.392283 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.392364 kubelet[2817]: E0709 14:55:10.392315 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392455 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.392899 kubelet[2817]: W0709 14:55:10.392471 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392480 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392609 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.392899 kubelet[2817]: W0709 14:55:10.392619 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392629 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392777 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.392899 kubelet[2817]: W0709 14:55:10.392786 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.392899 kubelet[2817]: E0709 14:55:10.392797 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.393654 kubelet[2817]: E0709 14:55:10.393422 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.393654 kubelet[2817]: W0709 14:55:10.393436 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.393654 kubelet[2817]: E0709 14:55:10.393447 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.394914 kubelet[2817]: E0709 14:55:10.394842 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.394914 kubelet[2817]: W0709 14:55:10.394860 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.394914 kubelet[2817]: E0709 14:55:10.394872 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.395330 kubelet[2817]: E0709 14:55:10.395247 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.395330 kubelet[2817]: W0709 14:55:10.395275 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.395330 kubelet[2817]: E0709 14:55:10.395287 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.395742 kubelet[2817]: E0709 14:55:10.395650 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.395742 kubelet[2817]: W0709 14:55:10.395665 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.395742 kubelet[2817]: E0709 14:55:10.395675 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.396052 kubelet[2817]: E0709 14:55:10.396031 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.396052 kubelet[2817]: W0709 14:55:10.396046 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.396339 kubelet[2817]: E0709 14:55:10.396057 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.396339 kubelet[2817]: I0709 14:55:10.396089 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49c2fd72-8bb2-49f1-b111-af8c92749a93-kubelet-dir\") pod \"csi-node-driver-ntrc4\" (UID: \"49c2fd72-8bb2-49f1-b111-af8c92749a93\") " pod="calico-system/csi-node-driver-ntrc4"
Jul 9 14:55:10.396339 kubelet[2817]: E0709 14:55:10.396260 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.396339 kubelet[2817]: W0709 14:55:10.396273 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.396339 kubelet[2817]: E0709 14:55:10.396294 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.396339 kubelet[2817]: I0709 14:55:10.396326 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7bn\" (UniqueName: \"kubernetes.io/projected/49c2fd72-8bb2-49f1-b111-af8c92749a93-kube-api-access-dk7bn\") pod \"csi-node-driver-ntrc4\" (UID: \"49c2fd72-8bb2-49f1-b111-af8c92749a93\") " pod="calico-system/csi-node-driver-ntrc4"
Jul 9 14:55:10.397039 kubelet[2817]: E0709 14:55:10.396495 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.397039 kubelet[2817]: W0709 14:55:10.396507 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.397039 kubelet[2817]: E0709 14:55:10.396526 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.397039 kubelet[2817]: I0709 14:55:10.396544 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49c2fd72-8bb2-49f1-b111-af8c92749a93-socket-dir\") pod \"csi-node-driver-ntrc4\" (UID: \"49c2fd72-8bb2-49f1-b111-af8c92749a93\") " pod="calico-system/csi-node-driver-ntrc4"
Jul 9 14:55:10.397039 kubelet[2817]: E0709 14:55:10.396749 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.397039 kubelet[2817]: W0709 14:55:10.396760 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.397039 kubelet[2817]: E0709 14:55:10.396777 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.397039 kubelet[2817]: I0709 14:55:10.396794 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/49c2fd72-8bb2-49f1-b111-af8c92749a93-varrun\") pod \"csi-node-driver-ntrc4\" (UID: \"49c2fd72-8bb2-49f1-b111-af8c92749a93\") " pod="calico-system/csi-node-driver-ntrc4"
Jul 9 14:55:10.397605 kubelet[2817]: E0709 14:55:10.397443 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.397605 kubelet[2817]: W0709 14:55:10.397460 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.397605 kubelet[2817]: E0709 14:55:10.397484 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.397977 kubelet[2817]: E0709 14:55:10.397861 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.397977 kubelet[2817]: W0709 14:55:10.397876 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.398156 kubelet[2817]: E0709 14:55:10.398132 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.398515 kubelet[2817]: E0709 14:55:10.398441 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.398515 kubelet[2817]: W0709 14:55:10.398457 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.398515 kubelet[2817]: E0709 14:55:10.398499 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.398908 kubelet[2817]: E0709 14:55:10.398839 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.398908 kubelet[2817]: W0709 14:55:10.398853 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.398908 kubelet[2817]: E0709 14:55:10.398893 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.399983 kubelet[2817]: E0709 14:55:10.399371 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.399983 kubelet[2817]: W0709 14:55:10.399387 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.399983 kubelet[2817]: E0709 14:55:10.399431 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.399983 kubelet[2817]: I0709 14:55:10.399457 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49c2fd72-8bb2-49f1-b111-af8c92749a93-registration-dir\") pod \"csi-node-driver-ntrc4\" (UID: \"49c2fd72-8bb2-49f1-b111-af8c92749a93\") " pod="calico-system/csi-node-driver-ntrc4"
Jul 9 14:55:10.400546 kubelet[2817]: E0709 14:55:10.400474 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.400546 kubelet[2817]: W0709 14:55:10.400490 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.400546 kubelet[2817]: E0709 14:55:10.400529 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.401273 kubelet[2817]: E0709 14:55:10.401181 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.401273 kubelet[2817]: W0709 14:55:10.401195 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.401273 kubelet[2817]: E0709 14:55:10.401207 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.401431 kubelet[2817]: E0709 14:55:10.401377 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.401431 kubelet[2817]: W0709 14:55:10.401389 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.401431 kubelet[2817]: E0709 14:55:10.401408 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.401776 kubelet[2817]: E0709 14:55:10.401535 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.401776 kubelet[2817]: W0709 14:55:10.401545 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.401776 kubelet[2817]: E0709 14:55:10.401555 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.401776 kubelet[2817]: E0709 14:55:10.401694 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.401776 kubelet[2817]: W0709 14:55:10.401703 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.401776 kubelet[2817]: E0709 14:55:10.401712 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.402000 kubelet[2817]: E0709 14:55:10.401840 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.402000 kubelet[2817]: W0709 14:55:10.401849 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.402000 kubelet[2817]: E0709 14:55:10.401860 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.466901 containerd[1557]: time="2025-07-09T14:55:10.466835904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8zt95,Uid:7712052e-f0a2-41fc-b8f4-ca44ae9074da,Namespace:calico-system,Attempt:0,}"
Jul 9 14:55:10.503869 kubelet[2817]: E0709 14:55:10.503735 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.503869 kubelet[2817]: W0709 14:55:10.503763 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.503869 kubelet[2817]: E0709 14:55:10.503788 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.504954 kubelet[2817]: E0709 14:55:10.504869 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.504954 kubelet[2817]: W0709 14:55:10.504883 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.504954 kubelet[2817]: E0709 14:55:10.504904 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.505418 kubelet[2817]: E0709 14:55:10.505363 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.505662 kubelet[2817]: W0709 14:55:10.505414 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.505662 kubelet[2817]: E0709 14:55:10.505467 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.506856 kubelet[2817]: E0709 14:55:10.506819 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.506856 kubelet[2817]: W0709 14:55:10.506847 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.508232 kubelet[2817]: E0709 14:55:10.506882 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.508232 kubelet[2817]: E0709 14:55:10.507304 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.508232 kubelet[2817]: W0709 14:55:10.507325 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.508232 kubelet[2817]: E0709 14:55:10.507345 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.509210 kubelet[2817]: E0709 14:55:10.509180 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.509210 kubelet[2817]: W0709 14:55:10.509207 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.509459 kubelet[2817]: E0709 14:55:10.509314 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.509502 kubelet[2817]: E0709 14:55:10.509478 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.509502 kubelet[2817]: W0709 14:55:10.509498 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.509739 kubelet[2817]: E0709 14:55:10.509647 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.510293 kubelet[2817]: E0709 14:55:10.510262 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.510293 kubelet[2817]: W0709 14:55:10.510288 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.510548 kubelet[2817]: E0709 14:55:10.510321 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.511154 kubelet[2817]: E0709 14:55:10.511123 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.511154 kubelet[2817]: W0709 14:55:10.511150 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.511477 kubelet[2817]: E0709 14:55:10.511182 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.512336 kubelet[2817]: E0709 14:55:10.512182 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.512336 kubelet[2817]: W0709 14:55:10.512227 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.512336 kubelet[2817]: E0709 14:55:10.512252 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.513277 kubelet[2817]: E0709 14:55:10.513240 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.513277 kubelet[2817]: W0709 14:55:10.513268 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.513998 kubelet[2817]: E0709 14:55:10.513300 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.514776 kubelet[2817]: E0709 14:55:10.514715 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.515148 kubelet[2817]: W0709 14:55:10.514958 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.515438 kubelet[2817]: E0709 14:55:10.515244 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.520566 kubelet[2817]: E0709 14:55:10.518509 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.520566 kubelet[2817]: W0709 14:55:10.519498 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.521111 kubelet[2817]: E0709 14:55:10.520994 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.522454 kubelet[2817]: E0709 14:55:10.522122 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.522610 kubelet[2817]: W0709 14:55:10.522578 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.523278 kubelet[2817]: E0709 14:55:10.523214 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.525154 kubelet[2817]: E0709 14:55:10.525128 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.525260 kubelet[2817]: W0709 14:55:10.525245 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.525581 kubelet[2817]: E0709 14:55:10.525565 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.525784 kubelet[2817]: W0709 14:55:10.525691 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.526813 containerd[1557]: time="2025-07-09T14:55:10.526195504Z" level=info msg="connecting to shim 0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055" address="unix:///run/containerd/s/a0f71153f45bccd28e2af50db6a1f8121217754608f1ea9350cc1c0312fbc691" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:55:10.526995 kubelet[2817]: E0709 14:55:10.526980 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.527136 kubelet[2817]: W0709 14:55:10.527116 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.527550 kubelet[2817]: E0709 14:55:10.527389 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.527550 kubelet[2817]: W0709 14:55:10.527402 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.527550 kubelet[2817]: E0709 14:55:10.527423 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.528014 kubelet[2817]: E0709 14:55:10.527999 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.528179 kubelet[2817]: W0709 14:55:10.528163 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.528991 kubelet[2817]: E0709 14:55:10.528265 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.531892 kubelet[2817]: E0709 14:55:10.531786 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.531892 kubelet[2817]: E0709 14:55:10.531814 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.532590 kubelet[2817]: E0709 14:55:10.531842 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.532709 kubelet[2817]: E0709 14:55:10.532695 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.532793 kubelet[2817]: W0709 14:55:10.532778 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.533080 kubelet[2817]: E0709 14:55:10.532863 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.534063 kubelet[2817]: E0709 14:55:10.534048 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.534151 kubelet[2817]: W0709 14:55:10.534132 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.534221 kubelet[2817]: E0709 14:55:10.534207 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.535320 kubelet[2817]: E0709 14:55:10.535224 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.535320 kubelet[2817]: W0709 14:55:10.535238 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.535320 kubelet[2817]: E0709 14:55:10.535251 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.535659 kubelet[2817]: E0709 14:55:10.535645 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.535726 kubelet[2817]: W0709 14:55:10.535714 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.536004 kubelet[2817]: E0709 14:55:10.535792 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.536243 kubelet[2817]: E0709 14:55:10.536229 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.536980 kubelet[2817]: W0709 14:55:10.536964 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.537261 kubelet[2817]: E0709 14:55:10.537246 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 14:55:10.537352 kubelet[2817]: W0709 14:55:10.537337 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 14:55:10.537786 kubelet[2817]: E0709 14:55:10.537427 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 14:55:10.537886 kubelet[2817]: E0709 14:55:10.537869 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:10.571927 kubelet[2817]: E0709 14:55:10.571892 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:10.572155 kubelet[2817]: W0709 14:55:10.572137 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:10.572280 kubelet[2817]: E0709 14:55:10.572243 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:10.590199 systemd[1]: Started cri-containerd-0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055.scope - libcontainer container 0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055. Jul 9 14:55:10.651786 containerd[1557]: time="2025-07-09T14:55:10.651739345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8zt95,Uid:7712052e-f0a2-41fc-b8f4-ca44ae9074da,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\"" Jul 9 14:55:12.449799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52267741.mount: Deactivated successfully. 
Jul 9 14:55:12.553840 kubelet[2817]: E0709 14:55:12.553743 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:14.055996 containerd[1557]: time="2025-07-09T14:55:14.055895243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:14.057099 containerd[1557]: time="2025-07-09T14:55:14.057040010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 9 14:55:14.059326 containerd[1557]: time="2025-07-09T14:55:14.058661552Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:14.062320 containerd[1557]: time="2025-07-09T14:55:14.062260342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:14.063597 containerd[1557]: time="2025-07-09T14:55:14.063559931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.787248333s"
Jul 9 14:55:14.063597 containerd[1557]: time="2025-07-09T14:55:14.063596610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 9 14:55:14.066910 containerd[1557]: time="2025-07-09T14:55:14.066830826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 9 14:55:14.084503 containerd[1557]: time="2025-07-09T14:55:14.084447403Z" level=info msg="CreateContainer within sandbox \"fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 9 14:55:14.100230 containerd[1557]: time="2025-07-09T14:55:14.099097125Z" level=info msg="Container 946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:55:14.115004 containerd[1557]: time="2025-07-09T14:55:14.114796976Z" level=info msg="CreateContainer within sandbox \"fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8\""
Jul 9 14:55:14.117371 containerd[1557]: time="2025-07-09T14:55:14.117336991Z" level=info msg="StartContainer for \"946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8\""
Jul 9 14:55:14.119659 containerd[1557]: time="2025-07-09T14:55:14.119623951Z" level=info msg="connecting to shim 946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8" address="unix:///run/containerd/s/5eb0d533c3f3032073885f82cc0cfbc8a065830ddd5964ab16f5098f18402c00" protocol=ttrpc version=3
Jul 9 14:55:14.148092 systemd[1]: Started cri-containerd-946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8.scope - libcontainer container 946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8.
Jul 9 14:55:14.227111 containerd[1557]: time="2025-07-09T14:55:14.225898090Z" level=info msg="StartContainer for \"946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8\" returns successfully" Jul 9 14:55:14.554063 kubelet[2817]: E0709 14:55:14.553849 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93" Jul 9 14:55:14.730968 kubelet[2817]: E0709 14:55:14.730711 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.730968 kubelet[2817]: W0709 14:55:14.730754 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.730968 kubelet[2817]: E0709 14:55:14.730821 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.731902 kubelet[2817]: E0709 14:55:14.731357 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.731902 kubelet[2817]: W0709 14:55:14.731368 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.731902 kubelet[2817]: E0709 14:55:14.731379 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.732550 kubelet[2817]: E0709 14:55:14.732305 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.732550 kubelet[2817]: W0709 14:55:14.732320 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.732550 kubelet[2817]: E0709 14:55:14.732331 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.733379 kubelet[2817]: E0709 14:55:14.733345 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.733588 kubelet[2817]: W0709 14:55:14.733573 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.733854 kubelet[2817]: E0709 14:55:14.733811 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.735155 kubelet[2817]: E0709 14:55:14.735118 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.735155 kubelet[2817]: W0709 14:55:14.735143 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.735383 kubelet[2817]: E0709 14:55:14.735164 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.735791 kubelet[2817]: E0709 14:55:14.735764 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.735791 kubelet[2817]: W0709 14:55:14.735780 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.735791 kubelet[2817]: E0709 14:55:14.735792 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.736273 kubelet[2817]: E0709 14:55:14.736251 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.736273 kubelet[2817]: W0709 14:55:14.736273 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.736464 kubelet[2817]: E0709 14:55:14.736290 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.738992 kubelet[2817]: E0709 14:55:14.737021 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.739061 kubelet[2817]: W0709 14:55:14.738995 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.739061 kubelet[2817]: E0709 14:55:14.739016 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.739648 kubelet[2817]: E0709 14:55:14.739302 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.739648 kubelet[2817]: W0709 14:55:14.739331 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.739648 kubelet[2817]: E0709 14:55:14.739342 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.739648 kubelet[2817]: E0709 14:55:14.739520 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.739648 kubelet[2817]: W0709 14:55:14.739530 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.739648 kubelet[2817]: E0709 14:55:14.739539 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.740152 kubelet[2817]: E0709 14:55:14.740129 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.740152 kubelet[2817]: W0709 14:55:14.740147 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.740389 kubelet[2817]: E0709 14:55:14.740159 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.742194 kubelet[2817]: E0709 14:55:14.741906 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.742194 kubelet[2817]: W0709 14:55:14.741991 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.742194 kubelet[2817]: E0709 14:55:14.742008 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.742326 kubelet[2817]: E0709 14:55:14.742219 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.742326 kubelet[2817]: W0709 14:55:14.742230 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.742326 kubelet[2817]: E0709 14:55:14.742240 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.742638 kubelet[2817]: E0709 14:55:14.742594 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.742638 kubelet[2817]: W0709 14:55:14.742627 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.742638 kubelet[2817]: E0709 14:55:14.742639 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.742863 kubelet[2817]: E0709 14:55:14.742781 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.742863 kubelet[2817]: W0709 14:55:14.742797 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.742863 kubelet[2817]: E0709 14:55:14.742806 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.761649 kubelet[2817]: E0709 14:55:14.761579 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.761649 kubelet[2817]: W0709 14:55:14.761622 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.761649 kubelet[2817]: E0709 14:55:14.761650 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.762276 kubelet[2817]: E0709 14:55:14.761895 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.762276 kubelet[2817]: W0709 14:55:14.761905 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.762276 kubelet[2817]: E0709 14:55:14.761915 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.762276 kubelet[2817]: E0709 14:55:14.762127 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.762276 kubelet[2817]: W0709 14:55:14.762136 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.762276 kubelet[2817]: E0709 14:55:14.762152 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.762838 kubelet[2817]: E0709 14:55:14.762295 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.762838 kubelet[2817]: W0709 14:55:14.762305 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.762838 kubelet[2817]: E0709 14:55:14.762314 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.762838 kubelet[2817]: E0709 14:55:14.762596 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.762838 kubelet[2817]: W0709 14:55:14.762606 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.762838 kubelet[2817]: E0709 14:55:14.762623 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.763632 kubelet[2817]: E0709 14:55:14.763480 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.763632 kubelet[2817]: W0709 14:55:14.763587 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.763928 kubelet[2817]: E0709 14:55:14.763836 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.764306 kubelet[2817]: E0709 14:55:14.764284 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.764306 kubelet[2817]: W0709 14:55:14.764300 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.764519 kubelet[2817]: E0709 14:55:14.764321 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.764712 kubelet[2817]: E0709 14:55:14.764667 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.764712 kubelet[2817]: W0709 14:55:14.764678 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.764712 kubelet[2817]: E0709 14:55:14.764710 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.765089 kubelet[2817]: E0709 14:55:14.765021 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.765089 kubelet[2817]: W0709 14:55:14.765077 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.765089 kubelet[2817]: E0709 14:55:14.765096 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.765502 kubelet[2817]: E0709 14:55:14.765284 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.765502 kubelet[2817]: W0709 14:55:14.765322 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.765502 kubelet[2817]: E0709 14:55:14.765408 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.765913 kubelet[2817]: E0709 14:55:14.765556 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.765913 kubelet[2817]: W0709 14:55:14.765613 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.765913 kubelet[2817]: E0709 14:55:14.765631 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.765913 kubelet[2817]: E0709 14:55:14.765864 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.765913 kubelet[2817]: W0709 14:55:14.765875 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.765913 kubelet[2817]: E0709 14:55:14.765885 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.766570 kubelet[2817]: E0709 14:55:14.766082 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.766570 kubelet[2817]: W0709 14:55:14.766093 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.766570 kubelet[2817]: E0709 14:55:14.766110 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.767174 kubelet[2817]: E0709 14:55:14.766922 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.767174 kubelet[2817]: W0709 14:55:14.766985 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.767174 kubelet[2817]: E0709 14:55:14.767022 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.767459 kubelet[2817]: E0709 14:55:14.767298 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.767459 kubelet[2817]: W0709 14:55:14.767341 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.767459 kubelet[2817]: E0709 14:55:14.767352 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.767888 kubelet[2817]: E0709 14:55:14.767847 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.768108 kubelet[2817]: W0709 14:55:14.768092 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.768285 kubelet[2817]: E0709 14:55:14.768179 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:14.770376 kubelet[2817]: E0709 14:55:14.770284 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.770376 kubelet[2817]: W0709 14:55:14.770334 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.770376 kubelet[2817]: E0709 14:55:14.770354 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 14:55:14.771513 kubelet[2817]: E0709 14:55:14.771441 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:14.771779 kubelet[2817]: W0709 14:55:14.771706 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:14.771779 kubelet[2817]: E0709 14:55:14.771730 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 14:55:15.731676 kubelet[2817]: I0709 14:55:15.730549 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:55:15.751285 kubelet[2817]: E0709 14:55:15.751150 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 14:55:15.751285 kubelet[2817]: W0709 14:55:15.751249 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 14:55:15.751591 kubelet[2817]: E0709 14:55:15.751352 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 9 14:55:16.244760 containerd[1557]: time="2025-07-09T14:55:16.244599857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:16.247429 containerd[1557]: time="2025-07-09T14:55:16.247316213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 9 14:55:16.248676 containerd[1557]: time="2025-07-09T14:55:16.248621230Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:16.264984 containerd[1557]: time="2025-07-09T14:55:16.262843699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:16.276343 containerd[1557]: time="2025-07-09T14:55:16.276234178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.209339943s"
Jul 9 14:55:16.276829 containerd[1557]: time="2025-07-09T14:55:16.276761307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 9 14:55:16.284862 containerd[1557]: time="2025-07-09T14:55:16.284758398Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 9 14:55:16.306131 containerd[1557]: time="2025-07-09T14:55:16.306087398Z" level=info msg="Container 058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:55:16.324038 containerd[1557]: time="2025-07-09T14:55:16.323894351Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\""
Jul 9 14:55:16.325244 containerd[1557]: time="2025-07-09T14:55:16.325095104Z" level=info msg="StartContainer for \"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\""
Jul 9 14:55:16.330381 containerd[1557]: time="2025-07-09T14:55:16.330302212Z" level=info msg="connecting to shim 058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c" address="unix:///run/containerd/s/a0f71153f45bccd28e2af50db6a1f8121217754608f1ea9350cc1c0312fbc691" protocol=ttrpc version=3
Jul 9 14:55:16.372547 systemd[1]: Started cri-containerd-058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c.scope - libcontainer container 058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c.
Jul 9 14:55:16.444409 containerd[1557]: time="2025-07-09T14:55:16.444327195Z" level=info msg="StartContainer for \"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\" returns successfully"
Jul 9 14:55:16.455169 systemd[1]: cri-containerd-058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c.scope: Deactivated successfully. 
Jul 9 14:55:16.464962 containerd[1557]: time="2025-07-09T14:55:16.464851526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\" id:\"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\" pid:3520 exited_at:{seconds:1752072916 nanos:462199872}"
Jul 9 14:55:16.465255 containerd[1557]: time="2025-07-09T14:55:16.464895959Z" level=info msg="received exit event container_id:\"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\" id:\"058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c\" pid:3520 exited_at:{seconds:1752072916 nanos:462199872}"
Jul 9 14:55:16.511289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c-rootfs.mount: Deactivated successfully.
Jul 9 14:55:16.554060 kubelet[2817]: E0709 14:55:16.553018 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:17.162134 kubelet[2817]: I0709 14:55:17.161702 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d7c687fd5-qs58n" podStartSLOduration=4.365532979 podStartE2EDuration="8.156474611s" podCreationTimestamp="2025-07-09 14:55:09 +0000 UTC" firstStartedPulling="2025-07-09 14:55:10.274874251 +0000 UTC m=+20.968818495" lastFinishedPulling="2025-07-09 14:55:14.065815863 +0000 UTC m=+24.759760127" observedRunningTime="2025-07-09 14:55:14.751809505 +0000 UTC m=+25.445753749" watchObservedRunningTime="2025-07-09 14:55:17.156474611 +0000 UTC m=+27.850418896"
Jul 9 14:55:17.766024 containerd[1557]: time="2025-07-09T14:55:17.765201908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 9 14:55:18.553500 kubelet[2817]: E0709 14:55:18.553342 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:20.554162 kubelet[2817]: E0709 14:55:20.553881 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:22.554379 kubelet[2817]: E0709 14:55:22.553620 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:24.127504 containerd[1557]: time="2025-07-09T14:55:24.127370349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:24.128992 containerd[1557]: time="2025-07-09T14:55:24.128960708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 9 14:55:24.131350 containerd[1557]: time="2025-07-09T14:55:24.130424267Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:24.134509 containerd[1557]: time="2025-07-09T14:55:24.134440103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:55:24.135515 containerd[1557]: time="2025-07-09T14:55:24.135482666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.370169509s"
Jul 9 14:55:24.135580 containerd[1557]: time="2025-07-09T14:55:24.135530126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 9 14:55:24.140667 containerd[1557]: time="2025-07-09T14:55:24.140274000Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 9 14:55:24.168494 containerd[1557]: time="2025-07-09T14:55:24.168414854Z" level=info msg="Container e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:55:24.200482 containerd[1557]: time="2025-07-09T14:55:24.200342963Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\""
Jul 9 14:55:24.204049 containerd[1557]: time="2025-07-09T14:55:24.203006112Z" level=info msg="StartContainer for \"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\""
Jul 9 14:55:24.209382 containerd[1557]: time="2025-07-09T14:55:24.209270673Z" level=info msg="connecting to shim e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029" address="unix:///run/containerd/s/a0f71153f45bccd28e2af50db6a1f8121217754608f1ea9350cc1c0312fbc691" protocol=ttrpc version=3
Jul 9 14:55:24.258365 systemd[1]: Started cri-containerd-e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029.scope - libcontainer container e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029.
Jul 9 14:55:24.343210 containerd[1557]: time="2025-07-09T14:55:24.343130068Z" level=info msg="StartContainer for \"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\" returns successfully"
Jul 9 14:55:24.554659 kubelet[2817]: E0709 14:55:24.553397 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:26.554044 kubelet[2817]: E0709 14:55:26.553156 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93"
Jul 9 14:55:26.728814 containerd[1557]: time="2025-07-09T14:55:26.728726891Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 14:55:26.734052 systemd[1]: cri-containerd-e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029.scope: Deactivated successfully. 
Jul 9 14:55:26.736547 systemd[1]: cri-containerd-e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029.scope: Consumed 1.697s CPU time, 191.8M memory peak, 171.2M written to disk.
Jul 9 14:55:26.738508 containerd[1557]: time="2025-07-09T14:55:26.738468319Z" level=info msg="received exit event container_id:\"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\" id:\"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\" pid:3580 exited_at:{seconds:1752072926 nanos:737070246}"
Jul 9 14:55:26.739156 containerd[1557]: time="2025-07-09T14:55:26.739103891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\" id:\"e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029\" pid:3580 exited_at:{seconds:1752072926 nanos:737070246}"
Jul 9 14:55:26.778778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029-rootfs.mount: Deactivated successfully.
Jul 9 14:55:26.792803 kubelet[2817]: I0709 14:55:26.792745 2817 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 9 14:55:27.407401 systemd[1]: Created slice kubepods-burstable-pod5bcda8dd_7fb0_4bc3_927d_f3fea33dc1b0.slice - libcontainer container kubepods-burstable-pod5bcda8dd_7fb0_4bc3_927d_f3fea33dc1b0.slice.
Jul 9 14:55:27.472123 kubelet[2817]: I0709 14:55:27.472027 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0-config-volume\") pod \"coredns-668d6bf9bc-btbwb\" (UID: \"5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0\") " pod="kube-system/coredns-668d6bf9bc-btbwb"
Jul 9 14:55:27.561045 systemd[1]: Created slice kubepods-besteffort-pode1912750_baae_41e5_b4c4_6ba619b7808e.slice - libcontainer container kubepods-besteffort-pode1912750_baae_41e5_b4c4_6ba619b7808e.slice.
Jul 9 14:55:27.599977 kubelet[2817]: I0709 14:55:27.599847 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhghm\" (UniqueName: \"kubernetes.io/projected/5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0-kube-api-access-nhghm\") pod \"coredns-668d6bf9bc-btbwb\" (UID: \"5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0\") " pod="kube-system/coredns-668d6bf9bc-btbwb"
Jul 9 14:55:27.601720 kubelet[2817]: I0709 14:55:27.600210 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a-config-volume\") pod \"coredns-668d6bf9bc-6ffph\" (UID: \"3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a\") " pod="kube-system/coredns-668d6bf9bc-6ffph"
Jul 9 14:55:27.601720 kubelet[2817]: I0709 14:55:27.600271 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1912750-baae-41e5-b4c4-6ba619b7808e-tigera-ca-bundle\") pod \"calico-kube-controllers-988b669bd-hmncc\" (UID: \"e1912750-baae-41e5-b4c4-6ba619b7808e\") " pod="calico-system/calico-kube-controllers-988b669bd-hmncc"
Jul 9 14:55:27.601720 kubelet[2817]: I0709 14:55:27.600372 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfbp\" (UniqueName: \"kubernetes.io/projected/e1912750-baae-41e5-b4c4-6ba619b7808e-kube-api-access-slfbp\") pod \"calico-kube-controllers-988b669bd-hmncc\" (UID: \"e1912750-baae-41e5-b4c4-6ba619b7808e\") " pod="calico-system/calico-kube-controllers-988b669bd-hmncc"
Jul 9 14:55:27.601720 kubelet[2817]: I0709 14:55:27.600439 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztww4\" (UniqueName: \"kubernetes.io/projected/3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a-kube-api-access-ztww4\") pod \"coredns-668d6bf9bc-6ffph\" (UID: \"3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a\") " pod="kube-system/coredns-668d6bf9bc-6ffph"
Jul 9 14:55:27.615652 systemd[1]: Created slice kubepods-burstable-pod3f9d6e04_5ff7_42c8_be2f_99dcacf78f7a.slice - libcontainer container kubepods-burstable-pod3f9d6e04_5ff7_42c8_be2f_99dcacf78f7a.slice.
Jul 9 14:55:27.635792 systemd[1]: Created slice kubepods-besteffort-pod56b0a123_293f_4e17_bf22_46e59a2bd3a0.slice - libcontainer container kubepods-besteffort-pod56b0a123_293f_4e17_bf22_46e59a2bd3a0.slice.
Jul 9 14:55:27.648545 systemd[1]: Created slice kubepods-besteffort-pod13edf4a1_bc0d_426f_9e80_67b8e2499afc.slice - libcontainer container kubepods-besteffort-pod13edf4a1_bc0d_426f_9e80_67b8e2499afc.slice.
Jul 9 14:55:27.656906 systemd[1]: Created slice kubepods-besteffort-podf8d26d52_a0eb_4b4c_8863_69ea830380d6.slice - libcontainer container kubepods-besteffort-podf8d26d52_a0eb_4b4c_8863_69ea830380d6.slice.
Jul 9 14:55:27.665349 systemd[1]: Created slice kubepods-besteffort-podfde2a149_c43b_473e_96d9_45a1cdea3b80.slice - libcontainer container kubepods-besteffort-podfde2a149_c43b_473e_96d9_45a1cdea3b80.slice.
Jul 9 14:55:27.802291 kubelet[2817]: I0709 14:55:27.802230 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56b0a123-293f-4e17-bf22-46e59a2bd3a0-calico-apiserver-certs\") pod \"calico-apiserver-b766f7455-pkcfx\" (UID: \"56b0a123-293f-4e17-bf22-46e59a2bd3a0\") " pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" Jul 9 14:55:27.802291 kubelet[2817]: I0709 14:55:27.802296 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8vq\" (UniqueName: \"kubernetes.io/projected/13edf4a1-bc0d-426f-9e80-67b8e2499afc-kube-api-access-qt8vq\") pod \"whisker-5f5d689c69-zzq6b\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " pod="calico-system/whisker-5f5d689c69-zzq6b" Jul 9 14:55:27.802611 kubelet[2817]: I0709 14:55:27.802318 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-backend-key-pair\") pod \"whisker-5f5d689c69-zzq6b\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " pod="calico-system/whisker-5f5d689c69-zzq6b" Jul 9 14:55:27.802611 kubelet[2817]: I0709 14:55:27.802385 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8d26d52-a0eb-4b4c-8863-69ea830380d6-calico-apiserver-certs\") pod \"calico-apiserver-b766f7455-qprmh\" (UID: \"f8d26d52-a0eb-4b4c-8863-69ea830380d6\") " pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:27.802611 kubelet[2817]: I0709 14:55:27.802429 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rw9b\" (UniqueName: \"kubernetes.io/projected/f8d26d52-a0eb-4b4c-8863-69ea830380d6-kube-api-access-7rw9b\") pod 
\"calico-apiserver-b766f7455-qprmh\" (UID: \"f8d26d52-a0eb-4b4c-8863-69ea830380d6\") " pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:27.802611 kubelet[2817]: I0709 14:55:27.802454 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-ca-bundle\") pod \"whisker-5f5d689c69-zzq6b\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " pod="calico-system/whisker-5f5d689c69-zzq6b" Jul 9 14:55:27.802611 kubelet[2817]: I0709 14:55:27.802476 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fde2a149-c43b-473e-96d9-45a1cdea3b80-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-2xp7x\" (UID: \"fde2a149-c43b-473e-96d9-45a1cdea3b80\") " pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:27.802898 kubelet[2817]: I0709 14:55:27.802504 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fde2a149-c43b-473e-96d9-45a1cdea3b80-goldmane-key-pair\") pod \"goldmane-768f4c5c69-2xp7x\" (UID: \"fde2a149-c43b-473e-96d9-45a1cdea3b80\") " pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:27.802898 kubelet[2817]: I0709 14:55:27.802527 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75js\" (UniqueName: \"kubernetes.io/projected/56b0a123-293f-4e17-bf22-46e59a2bd3a0-kube-api-access-p75js\") pod \"calico-apiserver-b766f7455-pkcfx\" (UID: \"56b0a123-293f-4e17-bf22-46e59a2bd3a0\") " pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" Jul 9 14:55:27.802898 kubelet[2817]: I0709 14:55:27.802550 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fde2a149-c43b-473e-96d9-45a1cdea3b80-config\") pod \"goldmane-768f4c5c69-2xp7x\" (UID: \"fde2a149-c43b-473e-96d9-45a1cdea3b80\") " pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:27.802898 kubelet[2817]: I0709 14:55:27.802583 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qzl\" (UniqueName: \"kubernetes.io/projected/fde2a149-c43b-473e-96d9-45a1cdea3b80-kube-api-access-d4qzl\") pod \"goldmane-768f4c5c69-2xp7x\" (UID: \"fde2a149-c43b-473e-96d9-45a1cdea3b80\") " pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:27.913213 containerd[1557]: time="2025-07-09T14:55:27.912424859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-988b669bd-hmncc,Uid:e1912750-baae-41e5-b4c4-6ba619b7808e,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:27.964844 containerd[1557]: time="2025-07-09T14:55:27.964619209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:28.027408 containerd[1557]: time="2025-07-09T14:55:28.027320621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:28.036695 containerd[1557]: time="2025-07-09T14:55:28.036302847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 14:55:28.135519 containerd[1557]: time="2025-07-09T14:55:28.135451841Z" level=error msg="Failed to destroy network for sandbox \"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.142208 containerd[1557]: time="2025-07-09T14:55:28.141510482Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-988b669bd-hmncc,Uid:e1912750-baae-41e5-b4c4-6ba619b7808e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.143071 kubelet[2817]: E0709 14:55:28.142703 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.143368 kubelet[2817]: E0709 14:55:28.143175 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-988b669bd-hmncc" Jul 9 14:55:28.143368 kubelet[2817]: E0709 14:55:28.143228 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-988b669bd-hmncc" Jul 9 14:55:28.143368 kubelet[2817]: E0709 14:55:28.143330 2817 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-988b669bd-hmncc_calico-system(e1912750-baae-41e5-b4c4-6ba619b7808e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-988b669bd-hmncc_calico-system(e1912750-baae-41e5-b4c4-6ba619b7808e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af53147db2801f0642519fce62fbbf212d751ad290196384826d956a86d3fd10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-988b669bd-hmncc" podUID="e1912750-baae-41e5-b4c4-6ba619b7808e" Jul 9 14:55:28.166218 containerd[1557]: time="2025-07-09T14:55:28.166116799Z" level=error msg="Failed to destroy network for sandbox \"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.168225 containerd[1557]: time="2025-07-09T14:55:28.168181631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.168830 kubelet[2817]: E0709 14:55:28.168600 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.168830 kubelet[2817]: E0709 14:55:28.168746 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ffph" Jul 9 14:55:28.168830 kubelet[2817]: E0709 14:55:28.168775 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ffph" Jul 9 14:55:28.169183 kubelet[2817]: E0709 14:55:28.169139 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ffph_kube-system(3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ffph_kube-system(3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68096fd6e971b7348eb3f52547c3a2492ff0a988e54cb68f5f1fa44aa4e1e089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ffph" 
podUID="3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a" Jul 9 14:55:28.186659 containerd[1557]: time="2025-07-09T14:55:28.186572149Z" level=error msg="Failed to destroy network for sandbox \"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.188468 containerd[1557]: time="2025-07-09T14:55:28.188418419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.188793 kubelet[2817]: E0709 14:55:28.188733 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.188858 kubelet[2817]: E0709 14:55:28.188827 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-btbwb" Jul 9 14:55:28.188900 kubelet[2817]: E0709 
14:55:28.188857 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-btbwb" Jul 9 14:55:28.189090 kubelet[2817]: E0709 14:55:28.188921 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-btbwb_kube-system(5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-btbwb_kube-system(5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7e025b215b0b177c11d6e02e06a633a91befb0f9be449e01977893d46d1a39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-btbwb" podUID="5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0" Jul 9 14:55:28.244664 containerd[1557]: time="2025-07-09T14:55:28.244438890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-pkcfx,Uid:56b0a123-293f-4e17-bf22-46e59a2bd3a0,Namespace:calico-apiserver,Attempt:0,}" Jul 9 14:55:28.255443 containerd[1557]: time="2025-07-09T14:55:28.255346946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5d689c69-zzq6b,Uid:13edf4a1-bc0d-426f-9e80-67b8e2499afc,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:28.285744 containerd[1557]: time="2025-07-09T14:55:28.285650351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:28.287997 
containerd[1557]: time="2025-07-09T14:55:28.287695395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,}" Jul 9 14:55:28.412413 containerd[1557]: time="2025-07-09T14:55:28.412343742Z" level=error msg="Failed to destroy network for sandbox \"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.415793 containerd[1557]: time="2025-07-09T14:55:28.415704655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-pkcfx,Uid:56b0a123-293f-4e17-bf22-46e59a2bd3a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.417165 kubelet[2817]: E0709 14:55:28.416318 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.417165 kubelet[2817]: E0709 14:55:28.416413 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" Jul 9 14:55:28.417165 kubelet[2817]: E0709 14:55:28.416450 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" Jul 9 14:55:28.417327 kubelet[2817]: E0709 14:55:28.416667 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b766f7455-pkcfx_calico-apiserver(56b0a123-293f-4e17-bf22-46e59a2bd3a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b766f7455-pkcfx_calico-apiserver(56b0a123-293f-4e17-bf22-46e59a2bd3a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e20ce2d6bf26215798b516aa75627e57dc61e1f17cc8d141045c4848c00c7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" podUID="56b0a123-293f-4e17-bf22-46e59a2bd3a0" Jul 9 14:55:28.419416 containerd[1557]: time="2025-07-09T14:55:28.419362879Z" level=error msg="Failed to destroy network for sandbox \"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.421262 containerd[1557]: time="2025-07-09T14:55:28.421135449Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5f5d689c69-zzq6b,Uid:13edf4a1-bc0d-426f-9e80-67b8e2499afc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.421509 kubelet[2817]: E0709 14:55:28.421453 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.421603 kubelet[2817]: E0709 14:55:28.421538 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5d689c69-zzq6b" Jul 9 14:55:28.421603 kubelet[2817]: E0709 14:55:28.421599 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5d689c69-zzq6b" Jul 9 14:55:28.421849 kubelet[2817]: E0709 14:55:28.421655 2817 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f5d689c69-zzq6b_calico-system(13edf4a1-bc0d-426f-9e80-67b8e2499afc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f5d689c69-zzq6b_calico-system(13edf4a1-bc0d-426f-9e80-67b8e2499afc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c183d04c62542757bf7d385fd65b90bd3fe6ece934dcd4d10ca0241fd3eac6de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5d689c69-zzq6b" podUID="13edf4a1-bc0d-426f-9e80-67b8e2499afc" Jul 9 14:55:28.441453 containerd[1557]: time="2025-07-09T14:55:28.441398066Z" level=error msg="Failed to destroy network for sandbox \"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.443332 containerd[1557]: time="2025-07-09T14:55:28.443282497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.444223 kubelet[2817]: E0709 14:55:28.444048 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.444342 kubelet[2817]: E0709 14:55:28.444253 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:28.444342 kubelet[2817]: E0709 14:55:28.444284 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:28.444521 kubelet[2817]: E0709 14:55:28.444387 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b766f7455-qprmh_calico-apiserver(f8d26d52-a0eb-4b4c-8863-69ea830380d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b766f7455-qprmh_calico-apiserver(f8d26d52-a0eb-4b4c-8863-69ea830380d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e3c685af20a5bd082f374487d8383640f99842c7f82da97cfe3a59f5a56f290\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" podUID="f8d26d52-a0eb-4b4c-8863-69ea830380d6" Jul 9 14:55:28.454320 
containerd[1557]: time="2025-07-09T14:55:28.454189422Z" level=error msg="Failed to destroy network for sandbox \"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.455978 containerd[1557]: time="2025-07-09T14:55:28.455867904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.456330 kubelet[2817]: E0709 14:55:28.456251 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.456501 kubelet[2817]: E0709 14:55:28.456464 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:28.456671 kubelet[2817]: E0709 14:55:28.456595 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:28.456892 kubelet[2817]: E0709 14:55:28.456828 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-2xp7x_calico-system(fde2a149-c43b-473e-96d9-45a1cdea3b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-2xp7x_calico-system(fde2a149-c43b-473e-96d9-45a1cdea3b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d4341ab6946655b42af79af4ec5ce5131d7a767ba1044178752a6b7ac54341d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-2xp7x" podUID="fde2a149-c43b-473e-96d9-45a1cdea3b80" Jul 9 14:55:28.572834 systemd[1]: Created slice kubepods-besteffort-pod49c2fd72_8bb2_49f1_b111_af8c92749a93.slice - libcontainer container kubepods-besteffort-pod49c2fd72_8bb2_49f1_b111_af8c92749a93.slice. 
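Every `Failed to destroy network for sandbox` entry above carries the same root cause: the Calico CNI plugin cannot `stat /var/lib/calico/nodename` because the calico/node container has not started yet. A minimal, hypothetical Python sketch (the regex and the function name are assumptions, not anything from this log's tooling) for collecting the distinct sandbox IDs from such containerd error lines:

```python
import re

# containerd quotes the 64-hex-digit sandbox ID with escaped quotes
# (\"...\") inside its "Failed to destroy network for sandbox" messages.
SANDBOX_RE = re.compile(r'Failed to destroy network for sandbox \\"([0-9a-f]{64})\\"')

def failed_sandboxes(log_text: str) -> set[str]:
    """Return the distinct sandbox IDs reported as failed in a log excerpt."""
    return set(SANDBOX_RE.findall(log_text))
```

Under the assumption that the excerpt is pasted in verbatim (escaped quotes intact), this deduplicates the IDs so repeated retries of the same sandbox are not double-counted.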
Jul 9 14:55:28.581620 containerd[1557]: time="2025-07-09T14:55:28.581541419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:28.716144 containerd[1557]: time="2025-07-09T14:55:28.716072014Z" level=error msg="Failed to destroy network for sandbox \"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.717790 containerd[1557]: time="2025-07-09T14:55:28.717718817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.719073 kubelet[2817]: E0709 14:55:28.718132 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:28.719073 kubelet[2817]: E0709 14:55:28.718229 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntrc4" Jul 9 14:55:28.719073 kubelet[2817]: E0709 14:55:28.718259 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntrc4" Jul 9 14:55:28.719475 kubelet[2817]: E0709 14:55:28.718318 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ntrc4_calico-system(49c2fd72-8bb2-49f1-b111-af8c92749a93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ntrc4_calico-system(49c2fd72-8bb2-49f1-b111-af8c92749a93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea28ea822c398b54d11add58fa7708ead4b3b8e813d51cf7e50ae9aa79227cef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93" Jul 9 14:55:28.942783 systemd[1]: run-netns-cni\x2d5717dae2\x2d9477\x2d8ff8\x2da14c\x2dc4b2746c85ee.mount: Deactivated successfully. 
Jul 9 14:55:38.704349 containerd[1557]: time="2025-07-09T14:55:38.703209621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,}" Jul 9 14:55:38.804418 kubelet[2817]: I0709 14:55:38.803725 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:55:39.044091 containerd[1557]: time="2025-07-09T14:55:39.041119247Z" level=error msg="Failed to destroy network for sandbox \"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.049028 systemd[1]: run-netns-cni\x2d9780ca91\x2d77cc\x2d4de5\x2df32e\x2d822be735e54a.mount: Deactivated successfully. Jul 9 14:55:39.053786 containerd[1557]: time="2025-07-09T14:55:39.053304432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.055862 kubelet[2817]: E0709 14:55:39.055168 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.055862 kubelet[2817]: E0709 14:55:39.055356 
2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:39.055862 kubelet[2817]: E0709 14:55:39.055439 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" Jul 9 14:55:39.056150 kubelet[2817]: E0709 14:55:39.055686 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b766f7455-qprmh_calico-apiserver(f8d26d52-a0eb-4b4c-8863-69ea830380d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b766f7455-qprmh_calico-apiserver(f8d26d52-a0eb-4b4c-8863-69ea830380d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87cfed4c5bf562d3cc7eda23fcb577bbda5e5146809c2025ecb12d9f940f18db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" podUID="f8d26d52-a0eb-4b4c-8863-69ea830380d6" Jul 9 14:55:39.555907 containerd[1557]: time="2025-07-09T14:55:39.555774292Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:39.948661 containerd[1557]: time="2025-07-09T14:55:39.948568861Z" level=error msg="Failed to destroy network for sandbox \"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.952634 systemd[1]: run-netns-cni\x2d546a4a3c\x2d91a7\x2dbac3\x2d6298\x2d00f7064694bd.mount: Deactivated successfully. Jul 9 14:55:39.955770 containerd[1557]: time="2025-07-09T14:55:39.955503431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.956405 kubelet[2817]: E0709 14:55:39.956234 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:39.957395 kubelet[2817]: E0709 14:55:39.956799 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-btbwb" Jul 9 14:55:39.957395 kubelet[2817]: E0709 14:55:39.956876 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-btbwb" Jul 9 14:55:39.957395 kubelet[2817]: E0709 14:55:39.957113 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-btbwb_kube-system(5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-btbwb_kube-system(5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"626ada9b0dadb4427e8230c4b9d0944904ef9a7334f8c2a077c703b45d8a710f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-btbwb" podUID="5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0" Jul 9 14:55:40.515302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671851908.mount: Deactivated successfully. 
Jul 9 14:55:40.555123 containerd[1557]: time="2025-07-09T14:55:40.555052278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:40.555303 containerd[1557]: time="2025-07-09T14:55:40.555279556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:40.555416 containerd[1557]: time="2025-07-09T14:55:40.555386969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:40.592546 containerd[1557]: time="2025-07-09T14:55:40.591243255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:40.608712 containerd[1557]: time="2025-07-09T14:55:40.608667167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 9 14:55:40.611957 containerd[1557]: time="2025-07-09T14:55:40.610593329Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:40.634138 containerd[1557]: time="2025-07-09T14:55:40.632999398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:40.636044 containerd[1557]: time="2025-07-09T14:55:40.634377467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.59798484s" Jul 9 14:55:40.636198 containerd[1557]: time="2025-07-09T14:55:40.636178484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 9 14:55:40.715435 containerd[1557]: time="2025-07-09T14:55:40.715384568Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 14:55:40.867251 containerd[1557]: time="2025-07-09T14:55:40.867199174Z" level=error msg="Failed to destroy network for sandbox \"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:40.870800 systemd[1]: run-netns-cni\x2d8ed20a41\x2d92f1\x2dda85\x2d6578\x2dee6a3f56d547.mount: Deactivated successfully. Jul 9 14:55:40.871219 containerd[1557]: time="2025-07-09T14:55:40.871186424Z" level=error msg="Failed to destroy network for sandbox \"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:40.878765 systemd[1]: run-netns-cni\x2d9f6b4cfb\x2df7e2\x2d7eef\x2df704\x2d2126696cb482.mount: Deactivated successfully. 
Jul 9 14:55:40.899132 containerd[1557]: time="2025-07-09T14:55:40.898985975Z" level=error msg="Failed to destroy network for sandbox \"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:40.905202 systemd[1]: run-netns-cni\x2dfa746b1a\x2dae73\x2d6f9c\x2d440e\x2d8b33942a37da.mount: Deactivated successfully. Jul 9 14:55:41.051975 containerd[1557]: time="2025-07-09T14:55:41.051705314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.052956 kubelet[2817]: E0709 14:55:41.052750 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.052956 kubelet[2817]: E0709 14:55:41.052901 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:41.053837 kubelet[2817]: E0709 14:55:41.053358 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-2xp7x" Jul 9 14:55:41.053837 kubelet[2817]: E0709 14:55:41.053515 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-2xp7x_calico-system(fde2a149-c43b-473e-96d9-45a1cdea3b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-2xp7x_calico-system(fde2a149-c43b-473e-96d9-45a1cdea3b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57a53608aa40025116669bbb89238899bb339e38b04c0abc8bd26b0b7389942f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-2xp7x" podUID="fde2a149-c43b-473e-96d9-45a1cdea3b80" Jul 9 14:55:41.056520 containerd[1557]: time="2025-07-09T14:55:41.056373086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.057385 kubelet[2817]: E0709 14:55:41.057088 2817 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.057385 kubelet[2817]: E0709 14:55:41.057182 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ffph" Jul 9 14:55:41.057385 kubelet[2817]: E0709 14:55:41.057216 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6ffph" Jul 9 14:55:41.057995 kubelet[2817]: E0709 14:55:41.057796 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6ffph_kube-system(3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6ffph_kube-system(3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83fcac811eb40bf0b52ce0ff7fdd4cd360fd826f0034f5ae06e87c090e73dd7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6ffph" podUID="3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a" Jul 9 14:55:41.058379 containerd[1557]: time="2025-07-09T14:55:41.058198870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.058721 kubelet[2817]: E0709 14:55:41.058507 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 14:55:41.058721 kubelet[2817]: E0709 14:55:41.058544 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntrc4" Jul 9 14:55:41.058721 kubelet[2817]: E0709 14:55:41.058587 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntrc4" Jul 9 14:55:41.059200 kubelet[2817]: E0709 14:55:41.058924 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ntrc4_calico-system(49c2fd72-8bb2-49f1-b111-af8c92749a93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ntrc4_calico-system(49c2fd72-8bb2-49f1-b111-af8c92749a93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43e5ab1d1eb3ba16030f2b2307618524d795abb611bf931a83a23a010890b340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ntrc4" podUID="49c2fd72-8bb2-49f1-b111-af8c92749a93" Jul 9 14:55:41.067963 containerd[1557]: time="2025-07-09T14:55:41.067764868Z" level=info msg="Container fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:41.088077 containerd[1557]: time="2025-07-09T14:55:41.088011428Z" level=info msg="CreateContainer within sandbox \"0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\"" Jul 9 14:55:41.089306 containerd[1557]: time="2025-07-09T14:55:41.089226069Z" level=info msg="StartContainer for \"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\"" Jul 9 14:55:41.092214 containerd[1557]: time="2025-07-09T14:55:41.092161034Z" level=info msg="connecting to shim fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a" address="unix:///run/containerd/s/a0f71153f45bccd28e2af50db6a1f8121217754608f1ea9350cc1c0312fbc691" protocol=ttrpc version=3 Jul 9 14:55:41.160113 
systemd[1]: Started cri-containerd-fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a.scope - libcontainer container fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a. Jul 9 14:55:41.233352 containerd[1557]: time="2025-07-09T14:55:41.233291642Z" level=info msg="StartContainer for \"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" returns successfully" Jul 9 14:55:41.354196 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 14:55:41.354592 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 14:55:41.556545 containerd[1557]: time="2025-07-09T14:55:41.555206985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-pkcfx,Uid:56b0a123-293f-4e17-bf22-46e59a2bd3a0,Namespace:calico-apiserver,Attempt:0,}" Jul 9 14:55:41.556545 containerd[1557]: time="2025-07-09T14:55:41.556050696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-988b669bd-hmncc,Uid:e1912750-baae-41e5-b4c4-6ba619b7808e,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:41.741989 kubelet[2817]: I0709 14:55:41.741778 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-backend-key-pair\") pod \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " Jul 9 14:55:41.743132 kubelet[2817]: I0709 14:55:41.742035 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-ca-bundle\") pod \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " Jul 9 14:55:41.743132 kubelet[2817]: I0709 14:55:41.742411 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qt8vq\" (UniqueName: \"kubernetes.io/projected/13edf4a1-bc0d-426f-9e80-67b8e2499afc-kube-api-access-qt8vq\") pod \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\" (UID: \"13edf4a1-bc0d-426f-9e80-67b8e2499afc\") " Jul 9 14:55:41.748786 kubelet[2817]: I0709 14:55:41.746535 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "13edf4a1-bc0d-426f-9e80-67b8e2499afc" (UID: "13edf4a1-bc0d-426f-9e80-67b8e2499afc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 14:55:41.763911 kubelet[2817]: I0709 14:55:41.763860 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "13edf4a1-bc0d-426f-9e80-67b8e2499afc" (UID: "13edf4a1-bc0d-426f-9e80-67b8e2499afc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 14:55:41.769275 kubelet[2817]: I0709 14:55:41.769183 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13edf4a1-bc0d-426f-9e80-67b8e2499afc-kube-api-access-qt8vq" (OuterVolumeSpecName: "kube-api-access-qt8vq") pod "13edf4a1-bc0d-426f-9e80-67b8e2499afc" (UID: "13edf4a1-bc0d-426f-9e80-67b8e2499afc"). InnerVolumeSpecName "kube-api-access-qt8vq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 14:55:41.806422 systemd[1]: var-lib-kubelet-pods-13edf4a1\x2dbc0d\x2d426f\x2d9e80\x2d67b8e2499afc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqt8vq.mount: Deactivated successfully. Jul 9 14:55:41.806548 systemd[1]: var-lib-kubelet-pods-13edf4a1\x2dbc0d\x2d426f\x2d9e80\x2d67b8e2499afc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 9 14:55:41.844916 kubelet[2817]: I0709 14:55:41.844782 2817 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-ca-bundle\") on node \"ci-9999-9-100-3d8d1010bc.novalocal\" DevicePath \"\"" Jul 9 14:55:41.844916 kubelet[2817]: I0709 14:55:41.844865 2817 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qt8vq\" (UniqueName: \"kubernetes.io/projected/13edf4a1-bc0d-426f-9e80-67b8e2499afc-kube-api-access-qt8vq\") on node \"ci-9999-9-100-3d8d1010bc.novalocal\" DevicePath \"\"" Jul 9 14:55:41.844916 kubelet[2817]: I0709 14:55:41.844890 2817 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13edf4a1-bc0d-426f-9e80-67b8e2499afc-whisker-backend-key-pair\") on node \"ci-9999-9-100-3d8d1010bc.novalocal\" DevicePath \"\"" Jul 9 14:55:41.994995 systemd-networkd[1452]: cali9249b6a5780: Link UP Jul 9 14:55:41.995760 systemd-networkd[1452]: cali9249b6a5780: Gained carrier Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.656 [INFO][4014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.770 [INFO][4014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0 calico-apiserver-b766f7455- calico-apiserver 56b0a123-293f-4e17-bf22-46e59a2bd3a0 812 0 2025-07-09 14:55:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b766f7455 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal calico-apiserver-b766f7455-pkcfx eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali9249b6a5780 [] [] }} ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.772 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.873 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" HandleID="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.874 [INFO][4046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" HandleID="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000372ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"calico-apiserver-b766f7455-pkcfx", "timestamp":"2025-07-09 14:55:41.873860798 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.874 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.874 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.874 [INFO][4046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.885 [INFO][4046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.893 [INFO][4046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.908 [INFO][4046] ipam/ipam.go 543: Ran out of existing affine blocks for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.911 [INFO][4046] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.923 [INFO][4046] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.101.192/26 Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.923 [INFO][4046] ipam/ipam.go 572: Found unclaimed block host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.923 [INFO][4046] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.929 [INFO][4046] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.929 [INFO][4046] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.931 [INFO][4046] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.937 [INFO][4046] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.027562 containerd[1557]: 2025-07-09 14:55:41.940 [INFO][4046] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.940 [INFO][4046] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.950 [INFO][4046] 
ipam/ipam_block_reader_writer.go 267: Successfully created block Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.950 [INFO][4046] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.955 [INFO][4046] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.955 [INFO][4046] ipam/ipam.go 607: Block '192.168.101.192/26' has 64 free ips which is more than 1 ips required. host="ci-9999-9-100-3d8d1010bc.novalocal" subnet=192.168.101.192/26 Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.955 [INFO][4046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.958 [INFO][4046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7 Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.963 [INFO][4046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.192/26] block=192.168.101.192/26 handle="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.192/26] 
handle="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.192/26] IPv6=[] ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" HandleID="k8s-pod-network.f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.977 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0", GenerateName:"calico-apiserver-b766f7455-", Namespace:"calico-apiserver", SelfLink:"", UID:"56b0a123-293f-4e17-bf22-46e59a2bd3a0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b766f7455", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"calico-apiserver-b766f7455-pkcfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9249b6a5780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.977 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.192/32] ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.029737 containerd[1557]: 2025-07-09 14:55:41.977 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9249b6a5780 ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.031345 containerd[1557]: 2025-07-09 14:55:41.996 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.031345 containerd[1557]: 2025-07-09 14:55:41.997 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0", GenerateName:"calico-apiserver-b766f7455-", Namespace:"calico-apiserver", SelfLink:"", UID:"56b0a123-293f-4e17-bf22-46e59a2bd3a0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b766f7455", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7", Pod:"calico-apiserver-b766f7455-pkcfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9249b6a5780", MAC:"7a:a9:a4:d6:ea:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:42.031345 containerd[1557]: 2025-07-09 14:55:42.017 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-pkcfx" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--pkcfx-eth0" Jul 9 14:55:42.062275 systemd-networkd[1452]: cali41a4c359744: Link UP Jul 9 14:55:42.062525 systemd-networkd[1452]: cali41a4c359744: Gained carrier Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.651 [INFO][4017] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.770 [INFO][4017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0 calico-kube-controllers-988b669bd- calico-system e1912750-baae-41e5-b4c4-6ba619b7808e 808 0 2025-07-09 14:55:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:988b669bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal calico-kube-controllers-988b669bd-hmncc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali41a4c359744 [] [] }} ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.772 [INFO][4017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" 
WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.876 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" HandleID="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.876 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" HandleID="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"calico-kube-controllers-988b669bd-hmncc", "timestamp":"2025-07-09 14:55:41.876427619 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.877 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.972 [INFO][4048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:41.986 [INFO][4048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.006 [INFO][4048] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.024 [INFO][4048] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.030 [INFO][4048] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.036 [INFO][4048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.036 [INFO][4048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.040 [INFO][4048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126 Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.048 [INFO][4048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 
containerd[1557]: 2025-07-09 14:55:42.056 [INFO][4048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.193/26] block=192.168.101.192/26 handle="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.090646 containerd[1557]: 2025-07-09 14:55:42.056 [INFO][4048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.193/26] handle="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.056 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.056 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.193/26] IPv6=[] ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" HandleID="k8s-pod-network.28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.058 [INFO][4017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0", GenerateName:"calico-kube-controllers-988b669bd-", Namespace:"calico-system", SelfLink:"", UID:"e1912750-baae-41e5-b4c4-6ba619b7808e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 10, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"988b669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"calico-kube-controllers-988b669bd-hmncc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a4c359744", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.058 [INFO][4017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.193/32] ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.058 [INFO][4017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41a4c359744 ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.093430 containerd[1557]: 2025-07-09 14:55:42.062 [INFO][4017] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.093750 containerd[1557]: 2025-07-09 14:55:42.063 [INFO][4017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0", GenerateName:"calico-kube-controllers-988b669bd-", Namespace:"calico-system", SelfLink:"", UID:"e1912750-baae-41e5-b4c4-6ba619b7808e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"988b669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126", Pod:"calico-kube-controllers-988b669bd-hmncc", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a4c359744", MAC:"3a:e2:f8:42:bb:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:42.093750 containerd[1557]: 2025-07-09 14:55:42.085 [INFO][4017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" Namespace="calico-system" Pod="calico-kube-controllers-988b669bd-hmncc" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--kube--controllers--988b669bd--hmncc-eth0" Jul 9 14:55:42.129755 containerd[1557]: time="2025-07-09T14:55:42.129695676Z" level=info msg="connecting to shim 28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126" address="unix:///run/containerd/s/e0ec2f119dbfed5f9bf9960e6e1f5f7594c8a24e36a30c8c73a480343f1bd607" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:42.132662 containerd[1557]: time="2025-07-09T14:55:42.132548566Z" level=info msg="connecting to shim f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7" address="unix:///run/containerd/s/3d16d7ed113f43c5ba40807e48507a11271311189fa3621ddf9ab7fb03fc25cc" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:42.159550 systemd[1]: Started cri-containerd-28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126.scope - libcontainer container 28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126. Jul 9 14:55:42.188156 systemd[1]: Started cri-containerd-f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7.scope - libcontainer container f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7. 
Jul 9 14:55:42.247320 systemd[1]: Removed slice kubepods-besteffort-pod13edf4a1_bc0d_426f_9e80_67b8e2499afc.slice - libcontainer container kubepods-besteffort-pod13edf4a1_bc0d_426f_9e80_67b8e2499afc.slice. Jul 9 14:55:42.338517 kubelet[2817]: I0709 14:55:42.336804 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8zt95" podStartSLOduration=2.347453053 podStartE2EDuration="32.336152901s" podCreationTimestamp="2025-07-09 14:55:10 +0000 UTC" firstStartedPulling="2025-07-09 14:55:10.654113359 +0000 UTC m=+21.348057593" lastFinishedPulling="2025-07-09 14:55:40.642813207 +0000 UTC m=+51.336757441" observedRunningTime="2025-07-09 14:55:42.331602561 +0000 UTC m=+53.025546815" watchObservedRunningTime="2025-07-09 14:55:42.336152901 +0000 UTC m=+53.030097165" Jul 9 14:55:42.361635 containerd[1557]: time="2025-07-09T14:55:42.360893668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-988b669bd-hmncc,Uid:e1912750-baae-41e5-b4c4-6ba619b7808e,Namespace:calico-system,Attempt:0,} returns sandbox id \"28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126\"" Jul 9 14:55:42.367501 containerd[1557]: time="2025-07-09T14:55:42.367458797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 14:55:42.369497 containerd[1557]: time="2025-07-09T14:55:42.369400869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-pkcfx,Uid:56b0a123-293f-4e17-bf22-46e59a2bd3a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7\"" Jul 9 14:55:42.459211 kubelet[2817]: I0709 14:55:42.459158 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f0e66c-bbb9-4f20-a304-fa5309a31303-whisker-ca-bundle\") pod \"whisker-87c8978ff-tk69g\" (UID: 
\"40f0e66c-bbb9-4f20-a304-fa5309a31303\") " pod="calico-system/whisker-87c8978ff-tk69g" Jul 9 14:55:42.459398 kubelet[2817]: I0709 14:55:42.459338 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6kp\" (UniqueName: \"kubernetes.io/projected/40f0e66c-bbb9-4f20-a304-fa5309a31303-kube-api-access-dk6kp\") pod \"whisker-87c8978ff-tk69g\" (UID: \"40f0e66c-bbb9-4f20-a304-fa5309a31303\") " pod="calico-system/whisker-87c8978ff-tk69g" Jul 9 14:55:42.460009 kubelet[2817]: I0709 14:55:42.459979 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40f0e66c-bbb9-4f20-a304-fa5309a31303-whisker-backend-key-pair\") pod \"whisker-87c8978ff-tk69g\" (UID: \"40f0e66c-bbb9-4f20-a304-fa5309a31303\") " pod="calico-system/whisker-87c8978ff-tk69g" Jul 9 14:55:42.466456 systemd[1]: Created slice kubepods-besteffort-pod40f0e66c_bbb9_4f20_a304_fa5309a31303.slice - libcontainer container kubepods-besteffort-pod40f0e66c_bbb9_4f20_a304_fa5309a31303.slice. 
Jul 9 14:55:42.552985 containerd[1557]: time="2025-07-09T14:55:42.552803283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"2179c3da12baf328be451100baa5ae44a43b9e09e7f3c4a53e17888be2c9901d\" pid:4162 exit_status:1 exited_at:{seconds:1752072942 nanos:552094036}" Jul 9 14:55:42.775044 containerd[1557]: time="2025-07-09T14:55:42.774289904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87c8978ff-tk69g,Uid:40f0e66c-bbb9-4f20-a304-fa5309a31303,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:43.030799 systemd-networkd[1452]: cali1ce202c5f4f: Link UP Jul 9 14:55:43.032116 systemd-networkd[1452]: cali1ce202c5f4f: Gained carrier Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.853 [INFO][4190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.868 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0 whisker-87c8978ff- calico-system 40f0e66c-bbb9-4f20-a304-fa5309a31303 904 0 2025-07-09 14:55:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:87c8978ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal whisker-87c8978ff-tk69g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1ce202c5f4f [] [] }} ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.870 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.906 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" HandleID="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.906 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" HandleID="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"whisker-87c8978ff-tk69g", "timestamp":"2025-07-09 14:55:42.906665879 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.907 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.907 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.907 [INFO][4202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.919 [INFO][4202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.936 [INFO][4202] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.944 [INFO][4202] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.947 [INFO][4202] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.951 [INFO][4202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.952 [INFO][4202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.955 [INFO][4202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74 Jul 9 14:55:43.056683 containerd[1557]: 2025-07-09 14:55:42.963 [INFO][4202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 
containerd[1557]: 2025-07-09 14:55:42.976 [ERROR][4202] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-101-192-26) Name="192-168-101-192-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-101-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.101.192/26", Affinity:(*string)(0xc0003a8520), Allocations:[]*int{(*int)(0xc00068d4f8), (*int)(0xc00068d500), (*int)(0xc00068d660), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, 
Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a8560), AttrSecondary:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"calico-apiserver-b766f7455-pkcfx", "timestamp":"2025-07-09 14:55:41.873860798 +0000 UTC"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a85c0), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"calico-kube-controllers-988b669bd-hmncc", "timestamp":"2025-07-09 14:55:41.876427619 +0000 UTC"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc00024f190), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"whisker-87c8978ff-tk69g", "timestamp":"2025-07-09 14:55:42.906665879 +0000 UTC"}}}, SequenceNumber:0x18509d0ecce9552b, SequenceNumberForAllocation:map[string]uint64{"0":0x18509d0ecce95528, "1":0x18509d0ecce95529, "2":0x18509d0ecce9552a}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-101-192-26": the object has been modified; please apply your changes to the latest version and try again Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:42.977 [INFO][4202] ipam/ipam.go 1247: Failed to update block block=192.168.101.192/26 error=update conflict: IPAMBlock(192-168-101-192-26) handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.003 [INFO][4202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.006 [INFO][4202] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74 Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.013 [INFO][4202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.023 [INFO][4202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.195/26] block=192.168.101.192/26 handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.023 [INFO][4202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.195/26] handle="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.023 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 14:55:43.058808 containerd[1557]: 2025-07-09 14:55:43.023 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.195/26] IPv6=[] ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" HandleID="k8s-pod-network.bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.026 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0", GenerateName:"whisker-87c8978ff-", Namespace:"calico-system", SelfLink:"", UID:"40f0e66c-bbb9-4f20-a304-fa5309a31303", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"87c8978ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"whisker-87c8978ff-tk69g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ce202c5f4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.026 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.195/32] ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.026 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ce202c5f4f ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.031 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.032 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0", GenerateName:"whisker-87c8978ff-", 
Namespace:"calico-system", SelfLink:"", UID:"40f0e66c-bbb9-4f20-a304-fa5309a31303", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"87c8978ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74", Pod:"whisker-87c8978ff-tk69g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ce202c5f4f", MAC:"62:c2:e0:86:fe:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:43.059842 containerd[1557]: 2025-07-09 14:55:43.053 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" Namespace="calico-system" Pod="whisker-87c8978ff-tk69g" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-whisker--87c8978ff--tk69g-eth0" Jul 9 14:55:43.095976 containerd[1557]: time="2025-07-09T14:55:43.095433425Z" level=info msg="connecting to shim bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74" address="unix:///run/containerd/s/971b342b35ea7934313027288af2c83c578747cb594cccf1840b6169d5cc38d4" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:43.126176 systemd[1]: Started 
cri-containerd-bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74.scope - libcontainer container bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74. Jul 9 14:55:43.217788 containerd[1557]: time="2025-07-09T14:55:43.217721738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87c8978ff-tk69g,Uid:40f0e66c-bbb9-4f20-a304-fa5309a31303,Namespace:calico-system,Attempt:0,} returns sandbox id \"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74\"" Jul 9 14:55:43.386296 containerd[1557]: time="2025-07-09T14:55:43.385863226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"ea5725fd1f8c156293b6f9e96a01f946f290ac62c906bfa30d2e0f7ae3f979de\" pid:4283 exit_status:1 exited_at:{seconds:1752072943 nanos:385127258}" Jul 9 14:55:43.558270 kubelet[2817]: I0709 14:55:43.558200 2817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13edf4a1-bc0d-426f-9e80-67b8e2499afc" path="/var/lib/kubelet/pods/13edf4a1-bc0d-426f-9e80-67b8e2499afc/volumes" Jul 9 14:55:43.651486 systemd-networkd[1452]: cali9249b6a5780: Gained IPv6LL Jul 9 14:55:44.035700 systemd-networkd[1452]: cali41a4c359744: Gained IPv6LL Jul 9 14:55:45.060307 systemd-networkd[1452]: cali1ce202c5f4f: Gained IPv6LL Jul 9 14:55:45.199668 systemd-networkd[1452]: vxlan.calico: Link UP Jul 9 14:55:45.199705 systemd-networkd[1452]: vxlan.calico: Gained carrier Jul 9 14:55:45.577533 update_engine[1535]: I20250709 14:55:45.576488 1535 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 9 14:55:45.577533 update_engine[1535]: I20250709 14:55:45.577492 1535 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 9 14:55:45.579323 update_engine[1535]: I20250709 14:55:45.578534 1535 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 9 14:55:45.580160 update_engine[1535]: 
I20250709 14:55:45.580121 1535 omaha_request_params.cc:62] Current group set to developer Jul 9 14:55:45.583555 update_engine[1535]: I20250709 14:55:45.583513 1535 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 9 14:55:45.583555 update_engine[1535]: I20250709 14:55:45.583542 1535 update_attempter.cc:643] Scheduling an action processor start. Jul 9 14:55:45.583669 update_engine[1535]: I20250709 14:55:45.583568 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 9 14:55:45.583729 update_engine[1535]: I20250709 14:55:45.583667 1535 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 9 14:55:45.584095 update_engine[1535]: I20250709 14:55:45.583894 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 9 14:55:45.584095 update_engine[1535]: I20250709 14:55:45.583909 1535 omaha_request_action.cc:272] Request: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: Jul 9 14:55:45.584095 update_engine[1535]: I20250709 14:55:45.583921 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 14:55:45.601835 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 9 14:55:45.603586 update_engine[1535]: I20250709 14:55:45.603526 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 14:55:45.604330 update_engine[1535]: I20250709 14:55:45.604240 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 9 14:55:45.610901 update_engine[1535]: E20250709 14:55:45.610845 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 14:55:45.611262 update_engine[1535]: I20250709 14:55:45.611238 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 9 14:55:47.236141 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Jul 9 14:55:48.325393 containerd[1557]: time="2025-07-09T14:55:48.325216022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:48.327780 containerd[1557]: time="2025-07-09T14:55:48.327712766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 9 14:55:48.329162 containerd[1557]: time="2025-07-09T14:55:48.329075704Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:48.331657 containerd[1557]: time="2025-07-09T14:55:48.331623514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:48.332871 containerd[1557]: time="2025-07-09T14:55:48.332385319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.964885394s" Jul 9 14:55:48.332871 containerd[1557]: time="2025-07-09T14:55:48.332430355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image 
reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 9 14:55:48.335232 containerd[1557]: time="2025-07-09T14:55:48.335195304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 14:55:48.355398 containerd[1557]: time="2025-07-09T14:55:48.355344958Z" level=info msg="CreateContainer within sandbox \"28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 14:55:48.372731 containerd[1557]: time="2025-07-09T14:55:48.372679076Z" level=info msg="Container ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:48.378795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094239556.mount: Deactivated successfully. Jul 9 14:55:48.397927 containerd[1557]: time="2025-07-09T14:55:48.397865009Z" level=info msg="CreateContainer within sandbox \"28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\"" Jul 9 14:55:48.399029 containerd[1557]: time="2025-07-09T14:55:48.398749827Z" level=info msg="StartContainer for \"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\"" Jul 9 14:55:48.400568 containerd[1557]: time="2025-07-09T14:55:48.400532094Z" level=info msg="connecting to shim ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400" address="unix:///run/containerd/s/e0ec2f119dbfed5f9bf9960e6e1f5f7594c8a24e36a30c8c73a480343f1bd607" protocol=ttrpc version=3 Jul 9 14:55:48.435122 systemd[1]: Started cri-containerd-ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400.scope - libcontainer container ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400. 
Jul 9 14:55:48.523503 containerd[1557]: time="2025-07-09T14:55:48.523444583Z" level=info msg="StartContainer for \"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" returns successfully" Jul 9 14:55:49.341977 kubelet[2817]: I0709 14:55:49.340604 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-988b669bd-hmncc" podStartSLOduration=33.372183157 podStartE2EDuration="39.34045811s" podCreationTimestamp="2025-07-09 14:55:10 +0000 UTC" firstStartedPulling="2025-07-09 14:55:42.36631517 +0000 UTC m=+53.060259414" lastFinishedPulling="2025-07-09 14:55:48.334590113 +0000 UTC m=+59.028534367" observedRunningTime="2025-07-09 14:55:49.336064333 +0000 UTC m=+60.030008597" watchObservedRunningTime="2025-07-09 14:55:49.34045811 +0000 UTC m=+60.034402344" Jul 9 14:55:49.392730 containerd[1557]: time="2025-07-09T14:55:49.391786006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"5dbb3bca70414e98b864c0f640931d40480e347dc45222cfdabbbb873c53b7d5\" pid:4547 exited_at:{seconds:1752072949 nanos:390412699}" Jul 9 14:55:52.879950 containerd[1557]: time="2025-07-09T14:55:52.878895404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:52.880837 containerd[1557]: time="2025-07-09T14:55:52.880810941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 9 14:55:52.881540 containerd[1557]: time="2025-07-09T14:55:52.881504548Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:52.885608 containerd[1557]: time="2025-07-09T14:55:52.885548504Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:52.886548 containerd[1557]: time="2025-07-09T14:55:52.886491029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.55125553s" Jul 9 14:55:52.886672 containerd[1557]: time="2025-07-09T14:55:52.886651792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 9 14:55:52.889033 containerd[1557]: time="2025-07-09T14:55:52.888999874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 14:55:52.890870 containerd[1557]: time="2025-07-09T14:55:52.890835982Z" level=info msg="CreateContainer within sandbox \"f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 14:55:52.910252 containerd[1557]: time="2025-07-09T14:55:52.910200711Z" level=info msg="Container 8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:52.925308 containerd[1557]: time="2025-07-09T14:55:52.925261124Z" level=info msg="CreateContainer within sandbox \"f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8\"" Jul 9 14:55:52.926498 containerd[1557]: time="2025-07-09T14:55:52.926375112Z" level=info msg="StartContainer for 
\"8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8\"" Jul 9 14:55:52.929972 containerd[1557]: time="2025-07-09T14:55:52.929905441Z" level=info msg="connecting to shim 8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8" address="unix:///run/containerd/s/3d16d7ed113f43c5ba40807e48507a11271311189fa3621ddf9ab7fb03fc25cc" protocol=ttrpc version=3 Jul 9 14:55:52.962162 systemd[1]: Started cri-containerd-8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8.scope - libcontainer container 8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8. Jul 9 14:55:53.094500 containerd[1557]: time="2025-07-09T14:55:53.094361233Z" level=info msg="StartContainer for \"8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8\" returns successfully" Jul 9 14:55:53.345652 kubelet[2817]: I0709 14:55:53.345543 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b766f7455-pkcfx" podStartSLOduration=36.830418856 podStartE2EDuration="47.345524791s" podCreationTimestamp="2025-07-09 14:55:06 +0000 UTC" firstStartedPulling="2025-07-09 14:55:42.372751487 +0000 UTC m=+53.066695721" lastFinishedPulling="2025-07-09 14:55:52.887857422 +0000 UTC m=+63.581801656" observedRunningTime="2025-07-09 14:55:53.342752501 +0000 UTC m=+64.036696755" watchObservedRunningTime="2025-07-09 14:55:53.345524791 +0000 UTC m=+64.039469025" Jul 9 14:55:53.555314 containerd[1557]: time="2025-07-09T14:55:53.555250628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,}" Jul 9 14:55:53.560051 containerd[1557]: time="2025-07-09T14:55:53.559580061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:53.962233 systemd-networkd[1452]: cali69ad1db5b14: Link UP Jul 9 
14:55:53.963711 systemd-networkd[1452]: cali69ad1db5b14: Gained carrier Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.704 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0 coredns-668d6bf9bc- kube-system 5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0 800 0 2025-07-09 14:54:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal coredns-668d6bf9bc-btbwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69ad1db5b14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.704 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.806 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" HandleID="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.806 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" HandleID="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"coredns-668d6bf9bc-btbwb", "timestamp":"2025-07-09 14:55:53.80620394 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.806 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.807 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.807 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.842 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.856 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.867 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.870 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.875 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.875 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.878 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.886 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 
containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.196/26] block=192.168.101.192/26 handle="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.196/26] handle="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.006023 containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:54.010441 containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.196/26] IPv6=[] ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" HandleID="k8s-pod-network.9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.010441 containerd[1557]: 2025-07-09 14:55:53.937 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 54, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-btbwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69ad1db5b14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.010441 containerd[1557]: 2025-07-09 14:55:53.944 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.196/32] ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.010441 containerd[1557]: 2025-07-09 14:55:53.944 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69ad1db5b14 ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" 
WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.010441 containerd[1557]: 2025-07-09 14:55:53.963 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.011163 containerd[1557]: 2025-07-09 14:55:53.964 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a", Pod:"coredns-668d6bf9bc-btbwb", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.101.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69ad1db5b14", MAC:"22:f3:bb:43:9a:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.011163 containerd[1557]: 2025-07-09 14:55:53.998 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" Namespace="kube-system" Pod="coredns-668d6bf9bc-btbwb" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--btbwb-eth0" Jul 9 14:55:54.068954 containerd[1557]: time="2025-07-09T14:55:54.068826316Z" level=info msg="connecting to shim 9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a" address="unix:///run/containerd/s/713ede0075448cffd17a986a36b18e2f593d972f47673ce779ff2f2437b86821" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:54.110427 systemd-networkd[1452]: cali48cd963d150: Link UP Jul 9 14:55:54.114553 systemd-networkd[1452]: cali48cd963d150: Gained carrier Jul 9 14:55:54.145852 systemd[1]: Started cri-containerd-9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a.scope - libcontainer container 9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a. 
Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.677 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0 calico-apiserver-b766f7455- calico-apiserver f8d26d52-a0eb-4b4c-8863-69ea830380d6 809 0 2025-07-09 14:55:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b766f7455 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal calico-apiserver-b766f7455-qprmh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48cd963d150 [] [] }} ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.677 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.807 [INFO][4638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" HandleID="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.807 [INFO][4638] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" HandleID="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000528510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"calico-apiserver-b766f7455-qprmh", "timestamp":"2025-07-09 14:55:53.807558873 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.807 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.917 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:53.950 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.010 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.021 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.031 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.038 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.040 [INFO][4638] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.042 [INFO][4638] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.065 [INFO][4638] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 
containerd[1557]: 2025-07-09 14:55:54.088 [INFO][4638] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.197/26] block=192.168.101.192/26 handle="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.088 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.197/26] handle="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.152235 containerd[1557]: 2025-07-09 14:55:54.088 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:54.153123 containerd[1557]: 2025-07-09 14:55:54.088 [INFO][4638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.197/26] IPv6=[] ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" HandleID="k8s-pod-network.bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.153123 containerd[1557]: 2025-07-09 14:55:54.102 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0", GenerateName:"calico-apiserver-b766f7455-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d26d52-a0eb-4b4c-8863-69ea830380d6", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b766f7455", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"calico-apiserver-b766f7455-qprmh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48cd963d150", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.153123 containerd[1557]: 2025-07-09 14:55:54.102 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.197/32] ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.153123 containerd[1557]: 2025-07-09 14:55:54.102 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48cd963d150 ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.153123 containerd[1557]: 2025-07-09 14:55:54.117 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.153365 containerd[1557]: 2025-07-09 14:55:54.126 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0", GenerateName:"calico-apiserver-b766f7455-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d26d52-a0eb-4b4c-8863-69ea830380d6", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b766f7455", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c", Pod:"calico-apiserver-b766f7455-qprmh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48cd963d150", MAC:"b2:fb:0a:c1:78:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.153365 containerd[1557]: 2025-07-09 14:55:54.144 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" Namespace="calico-apiserver" Pod="calico-apiserver-b766f7455-qprmh" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-calico--apiserver--b766f7455--qprmh-eth0" Jul 9 14:55:54.217622 containerd[1557]: time="2025-07-09T14:55:54.217314049Z" level=info msg="connecting to shim bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c" address="unix:///run/containerd/s/38a268431f1078221c5caa3db3e6e528624956b77b4d10c768b704a423a1ac2c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:54.287899 containerd[1557]: time="2025-07-09T14:55:54.287830055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btbwb,Uid:5bcda8dd-7fb0-4bc3-927d-f3fea33dc1b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a\"" Jul 9 14:55:54.298648 systemd[1]: Started cri-containerd-bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c.scope - libcontainer container bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c. 
Jul 9 14:55:54.301311 containerd[1557]: time="2025-07-09T14:55:54.301152570Z" level=info msg="CreateContainer within sandbox \"9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 14:55:54.318077 kubelet[2817]: I0709 14:55:54.318019 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:55:54.342724 containerd[1557]: time="2025-07-09T14:55:54.342663142Z" level=info msg="Container 30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:54.357706 containerd[1557]: time="2025-07-09T14:55:54.357581272Z" level=info msg="CreateContainer within sandbox \"9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b\"" Jul 9 14:55:54.359080 containerd[1557]: time="2025-07-09T14:55:54.359036762Z" level=info msg="StartContainer for \"30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b\"" Jul 9 14:55:54.362133 containerd[1557]: time="2025-07-09T14:55:54.361614155Z" level=info msg="connecting to shim 30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b" address="unix:///run/containerd/s/713ede0075448cffd17a986a36b18e2f593d972f47673ce779ff2f2437b86821" protocol=ttrpc version=3 Jul 9 14:55:54.408098 systemd[1]: Started cri-containerd-30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b.scope - libcontainer container 30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b. 
Jul 9 14:55:54.466996 containerd[1557]: time="2025-07-09T14:55:54.464915464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b766f7455-qprmh,Uid:f8d26d52-a0eb-4b4c-8863-69ea830380d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c\"" Jul 9 14:55:54.477098 containerd[1557]: time="2025-07-09T14:55:54.475043005Z" level=info msg="CreateContainer within sandbox \"bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 14:55:54.483360 containerd[1557]: time="2025-07-09T14:55:54.483071422Z" level=info msg="StartContainer for \"30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b\" returns successfully" Jul 9 14:55:54.501977 containerd[1557]: time="2025-07-09T14:55:54.501875118Z" level=info msg="Container 8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:54.519298 containerd[1557]: time="2025-07-09T14:55:54.519239243Z" level=info msg="CreateContainer within sandbox \"bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af\"" Jul 9 14:55:54.523148 containerd[1557]: time="2025-07-09T14:55:54.522142800Z" level=info msg="StartContainer for \"8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af\"" Jul 9 14:55:54.524852 containerd[1557]: time="2025-07-09T14:55:54.524173895Z" level=info msg="connecting to shim 8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af" address="unix:///run/containerd/s/38a268431f1078221c5caa3db3e6e528624956b77b4d10c768b704a423a1ac2c" protocol=ttrpc version=3 Jul 9 14:55:54.554757 containerd[1557]: time="2025-07-09T14:55:54.554684484Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,}" Jul 9 14:55:54.571215 systemd[1]: Started cri-containerd-8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af.scope - libcontainer container 8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af. Jul 9 14:55:54.694260 containerd[1557]: time="2025-07-09T14:55:54.694210211Z" level=info msg="StartContainer for \"8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af\" returns successfully" Jul 9 14:55:54.904354 systemd-networkd[1452]: cali07d60a54023: Link UP Jul 9 14:55:54.904670 systemd-networkd[1452]: cali07d60a54023: Gained carrier Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.689 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0 coredns-668d6bf9bc- kube-system 3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a 806 0 2025-07-09 14:54:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal coredns-668d6bf9bc-6ffph eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07d60a54023 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.691 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" 
WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.795 [INFO][4833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" HandleID="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.795 [INFO][4833] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" HandleID="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ceff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"coredns-668d6bf9bc-6ffph", "timestamp":"2025-07-09 14:55:54.795516123 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.797 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.797 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.798 [INFO][4833] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.824 [INFO][4833] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.840 [INFO][4833] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.852 [INFO][4833] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.860 [INFO][4833] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.865 [INFO][4833] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.865 [INFO][4833] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.868 [INFO][4833] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.874 [INFO][4833] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 
containerd[1557]: 2025-07-09 14:55:54.890 [INFO][4833] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.198/26] block=192.168.101.192/26 handle="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.890 [INFO][4833] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.198/26] handle="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:54.930405 containerd[1557]: 2025-07-09 14:55:54.890 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:54.933194 containerd[1557]: 2025-07-09 14:55:54.890 [INFO][4833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.198/26] IPv6=[] ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" HandleID="k8s-pod-network.13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.933194 containerd[1557]: 2025-07-09 14:55:54.897 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-6ffph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07d60a54023", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.933194 containerd[1557]: 2025-07-09 14:55:54.897 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.198/32] ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.933194 containerd[1557]: 2025-07-09 14:55:54.897 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07d60a54023 ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" 
WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.933194 containerd[1557]: 2025-07-09 14:55:54.901 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.934193 containerd[1557]: 2025-07-09 14:55:54.901 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc", Pod:"coredns-668d6bf9bc-6ffph", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.101.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07d60a54023", MAC:"ea:ac:17:94:8e:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:54.934193 containerd[1557]: 2025-07-09 14:55:54.924 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" Namespace="kube-system" Pod="coredns-668d6bf9bc-6ffph" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-coredns--668d6bf9bc--6ffph-eth0" Jul 9 14:55:54.995457 containerd[1557]: time="2025-07-09T14:55:54.994069066Z" level=info msg="connecting to shim 13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc" address="unix:///run/containerd/s/229ec75a2560ac3aa63bfad9af390f2224aa2da7593392e2eea34c7203c0c5b0" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:55.073874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637722050.mount: Deactivated successfully. Jul 9 14:55:55.097375 systemd[1]: Started cri-containerd-13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc.scope - libcontainer container 13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc. 
Jul 9 14:55:55.181291 systemd-networkd[1452]: cali48cd963d150: Gained IPv6LL Jul 9 14:55:55.299707 systemd-networkd[1452]: cali69ad1db5b14: Gained IPv6LL Jul 9 14:55:55.372975 containerd[1557]: time="2025-07-09T14:55:55.372909081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6ffph,Uid:3f9d6e04-5ff7-42c8-be2f-99dcacf78f7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc\"" Jul 9 14:55:55.392283 containerd[1557]: time="2025-07-09T14:55:55.392233474Z" level=info msg="CreateContainer within sandbox \"13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 14:55:55.423543 kubelet[2817]: I0709 14:55:55.423285 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-btbwb" podStartSLOduration=61.423254298 podStartE2EDuration="1m1.423254298s" podCreationTimestamp="2025-07-09 14:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:55:55.413200159 +0000 UTC m=+66.107144423" watchObservedRunningTime="2025-07-09 14:55:55.423254298 +0000 UTC m=+66.117198543" Jul 9 14:55:55.441176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509665188.mount: Deactivated successfully. 
Jul 9 14:55:55.452518 containerd[1557]: time="2025-07-09T14:55:55.451845248Z" level=info msg="Container 716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:55.489710 containerd[1557]: time="2025-07-09T14:55:55.489301901Z" level=info msg="CreateContainer within sandbox \"13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540\"" Jul 9 14:55:55.494021 containerd[1557]: time="2025-07-09T14:55:55.493958939Z" level=info msg="StartContainer for \"716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540\"" Jul 9 14:55:55.504341 containerd[1557]: time="2025-07-09T14:55:55.503481237Z" level=info msg="connecting to shim 716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540" address="unix:///run/containerd/s/229ec75a2560ac3aa63bfad9af390f2224aa2da7593392e2eea34c7203c0c5b0" protocol=ttrpc version=3 Jul 9 14:55:55.555188 systemd[1]: Started cri-containerd-716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540.scope - libcontainer container 716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540. 
Jul 9 14:55:55.566918 containerd[1557]: time="2025-07-09T14:55:55.566843794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:55.567532 containerd[1557]: time="2025-07-09T14:55:55.567413426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,}" Jul 9 14:55:55.574420 update_engine[1535]: I20250709 14:55:55.574124 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 14:55:55.576353 update_engine[1535]: I20250709 14:55:55.576299 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 14:55:55.577046 update_engine[1535]: I20250709 14:55:55.576822 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 9 14:55:55.582145 update_engine[1535]: E20250709 14:55:55.582011 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 14:55:55.582145 update_engine[1535]: I20250709 14:55:55.582135 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 9 14:55:55.842529 containerd[1557]: time="2025-07-09T14:55:55.841816020Z" level=info msg="StartContainer for \"716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540\" returns successfully" Jul 9 14:55:55.995544 containerd[1557]: time="2025-07-09T14:55:55.995221937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:55.998881 containerd[1557]: time="2025-07-09T14:55:55.998845539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 9 14:55:56.002160 containerd[1557]: time="2025-07-09T14:55:56.001365232Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:56.013212 containerd[1557]: time="2025-07-09T14:55:56.012766246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:56.018640 containerd[1557]: time="2025-07-09T14:55:56.018209183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.129175124s" Jul 9 14:55:56.018640 containerd[1557]: time="2025-07-09T14:55:56.018257263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 9 14:55:56.030561 containerd[1557]: time="2025-07-09T14:55:56.028920117Z" level=info msg="CreateContainer within sandbox \"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 14:55:56.044226 containerd[1557]: time="2025-07-09T14:55:56.044030534Z" level=info msg="Container 5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:56.066563 containerd[1557]: time="2025-07-09T14:55:56.066499223Z" level=info msg="CreateContainer within sandbox \"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4\"" Jul 9 14:55:56.070966 containerd[1557]: time="2025-07-09T14:55:56.068893050Z" level=info msg="StartContainer for 
\"5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4\"" Jul 9 14:55:56.072416 containerd[1557]: time="2025-07-09T14:55:56.072295826Z" level=info msg="connecting to shim 5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4" address="unix:///run/containerd/s/971b342b35ea7934313027288af2c83c578747cb594cccf1840b6169d5cc38d4" protocol=ttrpc version=3 Jul 9 14:55:56.141670 systemd[1]: Started cri-containerd-5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4.scope - libcontainer container 5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4. Jul 9 14:55:56.152763 systemd-networkd[1452]: cali3733c61838f: Link UP Jul 9 14:55:56.155368 systemd-networkd[1452]: cali3733c61838f: Gained carrier Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:55.835 [INFO][4936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0 csi-node-driver- calico-system 49c2fd72-8bb2-49f1-b111-af8c92749a93 684 0 2025-07-09 14:55:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal csi-node-driver-ntrc4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3733c61838f [] [] }} ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:55.837 [INFO][4936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" 
Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:55.999 [INFO][4978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" HandleID="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.000 [INFO][4978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" HandleID="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"csi-node-driver-ntrc4", "timestamp":"2025-07-09 14:55:55.999821386 +0000 UTC"}, Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.000 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.000 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.000 [INFO][4978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.019 [INFO][4978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.044 [INFO][4978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.063 [INFO][4978] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.073 [INFO][4978] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.089 [INFO][4978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.089 [INFO][4978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.099 [INFO][4978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509 Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.117 [INFO][4978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 
containerd[1557]: 2025-07-09 14:55:56.130 [INFO][4978] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.199/26] block=192.168.101.192/26 handle="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.130 [INFO][4978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.199/26] handle="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.191784 containerd[1557]: 2025-07-09 14:55:56.130 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 14:55:56.195381 containerd[1557]: 2025-07-09 14:55:56.130 [INFO][4978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.199/26] IPv6=[] ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" HandleID="k8s-pod-network.44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.195381 containerd[1557]: 2025-07-09 14:55:56.138 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c2fd72-8bb2-49f1-b111-af8c92749a93", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"csi-node-driver-ntrc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3733c61838f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:56.195381 containerd[1557]: 2025-07-09 14:55:56.138 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.199/32] ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.195381 containerd[1557]: 2025-07-09 14:55:56.138 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3733c61838f ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.195381 containerd[1557]: 2025-07-09 14:55:56.158 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" 
Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.195594 kubelet[2817]: I0709 14:55:56.193590 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b766f7455-qprmh" podStartSLOduration=50.193563253 podStartE2EDuration="50.193563253s" podCreationTimestamp="2025-07-09 14:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:55:55.477569862 +0000 UTC m=+66.171514106" watchObservedRunningTime="2025-07-09 14:55:56.193563253 +0000 UTC m=+66.887507487" Jul 9 14:55:56.195674 containerd[1557]: 2025-07-09 14:55:56.158 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c2fd72-8bb2-49f1-b111-af8c92749a93", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509", Pod:"csi-node-driver-ntrc4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3733c61838f", MAC:"42:06:ae:f2:75:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:56.195674 containerd[1557]: 2025-07-09 14:55:56.188 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" Namespace="calico-system" Pod="csi-node-driver-ntrc4" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-csi--node--driver--ntrc4-eth0" Jul 9 14:55:56.248006 containerd[1557]: time="2025-07-09T14:55:56.247749634Z" level=info msg="connecting to shim 44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509" address="unix:///run/containerd/s/ddc6df684d8860aebe508fac4162c575bde5b8a974bfb3c82be698fe3b925e7b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:56.259224 systemd-networkd[1452]: cali63295c12538: Link UP Jul 9 14:55:56.262088 systemd-networkd[1452]: cali63295c12538: Gained carrier Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:55.892 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0 goldmane-768f4c5c69- calico-system fde2a149-c43b-473e-96d9-45a1cdea3b80 811 0 2025-07-09 14:55:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-9999-9-100-3d8d1010bc.novalocal goldmane-768f4c5c69-2xp7x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63295c12538 [] [] }} ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:55.893 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.014 [INFO][4984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" HandleID="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.014 [INFO][4984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" HandleID="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-3d8d1010bc.novalocal", "pod":"goldmane-768f4c5c69-2xp7x", "timestamp":"2025-07-09 14:55:56.014318919 +0000 UTC"}, 
Hostname:"ci-9999-9-100-3d8d1010bc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.014 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.131 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.131 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-3d8d1010bc.novalocal' Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.154 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.172 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.196 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.202 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.208 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.192/26 host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.208 [INFO][4984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.192/26 handle="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" 
host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.212 [INFO][4984] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.226 [INFO][4984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.192/26 handle="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.246 [INFO][4984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.200/26] block=192.168.101.192/26 handle="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.247 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.200/26] handle="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" host="ci-9999-9-100-3d8d1010bc.novalocal" Jul 9 14:55:56.311459 containerd[1557]: 2025-07-09 14:55:56.247 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.247 [INFO][4984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.200/26] IPv6=[] ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" HandleID="k8s-pod-network.1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Workload="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.251 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fde2a149-c43b-473e-96d9-45a1cdea3b80", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"", Pod:"goldmane-768f4c5c69-2xp7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63295c12538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.253 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.200/32] ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.253 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63295c12538 ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.259 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.314583 containerd[1557]: 2025-07-09 14:55:56.264 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0", 
GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"fde2a149-c43b-473e-96d9-45a1cdea3b80", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 14, 55, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-3d8d1010bc.novalocal", ContainerID:"1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca", Pod:"goldmane-768f4c5c69-2xp7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63295c12538", MAC:"06:64:0a:41:b0:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 14:55:56.314889 containerd[1557]: 2025-07-09 14:55:56.304 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" Namespace="calico-system" Pod="goldmane-768f4c5c69-2xp7x" WorkloadEndpoint="ci--9999--9--100--3d8d1010bc.novalocal-k8s-goldmane--768f4c5c69--2xp7x-eth0" Jul 9 14:55:56.367979 kubelet[2817]: I0709 14:55:56.367842 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:55:56.383243 systemd[1]: Started cri-containerd-44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509.scope - libcontainer container 
44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509. Jul 9 14:55:56.409371 containerd[1557]: time="2025-07-09T14:55:56.409196282Z" level=info msg="connecting to shim 1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca" address="unix:///run/containerd/s/ef0cade8fd63746330d04c7ba70033da80e7e4a9072c58382646a7538b71b14a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:55:56.425165 kubelet[2817]: I0709 14:55:56.424594 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6ffph" podStartSLOduration=62.424569754 podStartE2EDuration="1m2.424569754s" podCreationTimestamp="2025-07-09 14:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:55:56.423279525 +0000 UTC m=+67.117223769" watchObservedRunningTime="2025-07-09 14:55:56.424569754 +0000 UTC m=+67.118513988" Jul 9 14:55:56.451277 systemd-networkd[1452]: cali07d60a54023: Gained IPv6LL Jul 9 14:55:56.485440 systemd[1]: Started cri-containerd-1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca.scope - libcontainer container 1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca. 
Jul 9 14:55:56.681421 containerd[1557]: time="2025-07-09T14:55:56.681215323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntrc4,Uid:49c2fd72-8bb2-49f1-b111-af8c92749a93,Namespace:calico-system,Attempt:0,} returns sandbox id \"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509\"" Jul 9 14:55:56.687747 containerd[1557]: time="2025-07-09T14:55:56.687634318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 14:55:56.781203 containerd[1557]: time="2025-07-09T14:55:56.780923764Z" level=info msg="StartContainer for \"5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4\" returns successfully" Jul 9 14:55:56.887855 containerd[1557]: time="2025-07-09T14:55:56.887781693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-2xp7x,Uid:fde2a149-c43b-473e-96d9-45a1cdea3b80,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca\"" Jul 9 14:55:58.051245 systemd-networkd[1452]: cali63295c12538: Gained IPv6LL Jul 9 14:55:58.115360 systemd-networkd[1452]: cali3733c61838f: Gained IPv6LL Jul 9 14:55:59.823234 containerd[1557]: time="2025-07-09T14:55:59.822567981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:59.827239 containerd[1557]: time="2025-07-09T14:55:59.825150181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 9 14:55:59.827239 containerd[1557]: time="2025-07-09T14:55:59.826707371Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:59.832021 containerd[1557]: time="2025-07-09T14:55:59.831537172Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:55:59.832558 containerd[1557]: time="2025-07-09T14:55:59.832357767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.144641766s" Jul 9 14:55:59.832558 containerd[1557]: time="2025-07-09T14:55:59.832420184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 9 14:55:59.835952 containerd[1557]: time="2025-07-09T14:55:59.835893682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 14:55:59.846241 containerd[1557]: time="2025-07-09T14:55:59.845826898Z" level=info msg="CreateContainer within sandbox \"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 14:55:59.872820 containerd[1557]: time="2025-07-09T14:55:59.872403538Z" level=info msg="Container bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:55:59.895272 containerd[1557]: time="2025-07-09T14:55:59.895187620Z" level=info msg="CreateContainer within sandbox \"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b\"" Jul 9 14:55:59.897304 containerd[1557]: time="2025-07-09T14:55:59.897214265Z" level=info msg="StartContainer for \"bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b\"" Jul 9 
14:55:59.900816 containerd[1557]: time="2025-07-09T14:55:59.900756862Z" level=info msg="connecting to shim bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b" address="unix:///run/containerd/s/ddc6df684d8860aebe508fac4162c575bde5b8a974bfb3c82be698fe3b925e7b" protocol=ttrpc version=3 Jul 9 14:55:59.956548 systemd[1]: Started cri-containerd-bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b.scope - libcontainer container bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b. Jul 9 14:56:00.077112 containerd[1557]: time="2025-07-09T14:56:00.076221737Z" level=info msg="StartContainer for \"bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b\" returns successfully" Jul 9 14:56:03.644671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148329051.mount: Deactivated successfully. Jul 9 14:56:03.700795 containerd[1557]: time="2025-07-09T14:56:03.700573640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:03.704228 containerd[1557]: time="2025-07-09T14:56:03.703003061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 9 14:56:03.709339 containerd[1557]: time="2025-07-09T14:56:03.709280382Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:03.719965 containerd[1557]: time="2025-07-09T14:56:03.719844419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:03.722065 containerd[1557]: time="2025-07-09T14:56:03.721993382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id 
\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.886000163s" Jul 9 14:56:03.722354 containerd[1557]: time="2025-07-09T14:56:03.722307533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 9 14:56:03.725750 containerd[1557]: time="2025-07-09T14:56:03.725687042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 14:56:03.730454 containerd[1557]: time="2025-07-09T14:56:03.730330989Z" level=info msg="CreateContainer within sandbox \"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 14:56:03.756052 containerd[1557]: time="2025-07-09T14:56:03.755737283Z" level=info msg="Container b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:56:03.795197 containerd[1557]: time="2025-07-09T14:56:03.794764566Z" level=info msg="CreateContainer within sandbox \"bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6\"" Jul 9 14:56:03.798240 containerd[1557]: time="2025-07-09T14:56:03.798165474Z" level=info msg="StartContainer for \"b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6\"" Jul 9 14:56:03.803384 containerd[1557]: time="2025-07-09T14:56:03.803227248Z" level=info msg="connecting to shim b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6" address="unix:///run/containerd/s/971b342b35ea7934313027288af2c83c578747cb594cccf1840b6169d5cc38d4" protocol=ttrpc 
version=3 Jul 9 14:56:03.849344 systemd[1]: Started cri-containerd-b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6.scope - libcontainer container b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6. Jul 9 14:56:03.938832 containerd[1557]: time="2025-07-09T14:56:03.938586665Z" level=info msg="StartContainer for \"b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6\" returns successfully" Jul 9 14:56:04.548019 kubelet[2817]: I0709 14:56:04.544868 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-87c8978ff-tk69g" podStartSLOduration=2.040429121 podStartE2EDuration="22.54466169s" podCreationTimestamp="2025-07-09 14:55:42 +0000 UTC" firstStartedPulling="2025-07-09 14:55:43.220984269 +0000 UTC m=+53.914928513" lastFinishedPulling="2025-07-09 14:56:03.725216788 +0000 UTC m=+74.419161082" observedRunningTime="2025-07-09 14:56:04.544025303 +0000 UTC m=+75.237969587" watchObservedRunningTime="2025-07-09 14:56:04.54466169 +0000 UTC m=+75.238605974" Jul 9 14:56:05.576105 update_engine[1535]: I20250709 14:56:05.575299 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 14:56:05.578751 update_engine[1535]: I20250709 14:56:05.578006 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 14:56:05.579454 update_engine[1535]: I20250709 14:56:05.579361 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 9 14:56:05.584668 update_engine[1535]: E20250709 14:56:05.584275 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 14:56:05.584668 update_engine[1535]: I20250709 14:56:05.584546 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 9 14:56:08.347278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895716831.mount: Deactivated successfully. 
Jul 9 14:56:09.386230 containerd[1557]: time="2025-07-09T14:56:09.386144821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:09.389274 containerd[1557]: time="2025-07-09T14:56:09.388976577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 9 14:56:09.391262 containerd[1557]: time="2025-07-09T14:56:09.391223032Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:09.398197 containerd[1557]: time="2025-07-09T14:56:09.398147506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:09.401674 containerd[1557]: time="2025-07-09T14:56:09.401149893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.675401004s" Jul 9 14:56:09.401869 containerd[1557]: time="2025-07-09T14:56:09.401846904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 9 14:56:09.408114 containerd[1557]: time="2025-07-09T14:56:09.408051304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 14:56:09.412928 containerd[1557]: time="2025-07-09T14:56:09.412047600Z" level=info msg="CreateContainer within sandbox 
\"1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 14:56:09.442147 containerd[1557]: time="2025-07-09T14:56:09.442087630Z" level=info msg="Container 38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:56:09.467334 containerd[1557]: time="2025-07-09T14:56:09.467223316Z" level=info msg="CreateContainer within sandbox \"1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\"" Jul 9 14:56:09.468351 containerd[1557]: time="2025-07-09T14:56:09.468270737Z" level=info msg="StartContainer for \"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\"" Jul 9 14:56:09.470710 containerd[1557]: time="2025-07-09T14:56:09.470650603Z" level=info msg="connecting to shim 38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab" address="unix:///run/containerd/s/ef0cade8fd63746330d04c7ba70033da80e7e4a9072c58382646a7538b71b14a" protocol=ttrpc version=3 Jul 9 14:56:09.549629 systemd[1]: Started cri-containerd-38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab.scope - libcontainer container 38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab. 
Jul 9 14:56:09.747288 containerd[1557]: time="2025-07-09T14:56:09.746214464Z" level=info msg="StartContainer for \"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" returns successfully" Jul 9 14:56:10.621966 kubelet[2817]: I0709 14:56:10.620562 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-2xp7x" podStartSLOduration=49.105457252 podStartE2EDuration="1m1.620541235s" podCreationTimestamp="2025-07-09 14:55:09 +0000 UTC" firstStartedPulling="2025-07-09 14:55:56.890375486 +0000 UTC m=+67.584319720" lastFinishedPulling="2025-07-09 14:56:09.405459469 +0000 UTC m=+80.099403703" observedRunningTime="2025-07-09 14:56:10.614842918 +0000 UTC m=+81.308787172" watchObservedRunningTime="2025-07-09 14:56:10.620541235 +0000 UTC m=+81.314485469" Jul 9 14:56:10.826840 containerd[1557]: time="2025-07-09T14:56:10.826561614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"30ad7c22775ae5337a1453c1b5835baf4106c49584b8e2882cdd5233d358c4ac\" pid:5292 exit_status:1 exited_at:{seconds:1752072970 nanos:825099455}" Jul 9 14:56:12.189185 containerd[1557]: time="2025-07-09T14:56:12.188676125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"ca5341a6c2d9c270fc6901b25449c4ec9ebef033dbc1c8b01232a719eb475cff\" pid:5316 exit_status:1 exited_at:{seconds:1752072972 nanos:187868565}" Jul 9 14:56:12.262430 containerd[1557]: time="2025-07-09T14:56:12.261288183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:12.263110 containerd[1557]: time="2025-07-09T14:56:12.263082227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 9 14:56:12.263927 
containerd[1557]: time="2025-07-09T14:56:12.263898121Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:12.268539 containerd[1557]: time="2025-07-09T14:56:12.268481810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:56:12.270875 containerd[1557]: time="2025-07-09T14:56:12.270474058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.862373862s" Jul 9 14:56:12.271191 containerd[1557]: time="2025-07-09T14:56:12.271131454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 9 14:56:12.278523 containerd[1557]: time="2025-07-09T14:56:12.278000211Z" level=info msg="CreateContainer within sandbox \"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 14:56:12.295964 containerd[1557]: time="2025-07-09T14:56:12.295867510Z" level=info msg="Container a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:56:12.320251 containerd[1557]: time="2025-07-09T14:56:12.320164131Z" level=info msg="CreateContainer within sandbox \"44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3\"" Jul 9 14:56:12.321635 containerd[1557]: time="2025-07-09T14:56:12.321584923Z" level=info msg="StartContainer for \"a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3\"" Jul 9 14:56:12.326733 containerd[1557]: time="2025-07-09T14:56:12.326682980Z" level=info msg="connecting to shim a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3" address="unix:///run/containerd/s/ddc6df684d8860aebe508fac4162c575bde5b8a974bfb3c82be698fe3b925e7b" protocol=ttrpc version=3 Jul 9 14:56:12.368157 systemd[1]: Started cri-containerd-a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3.scope - libcontainer container a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3. Jul 9 14:56:12.530660 containerd[1557]: time="2025-07-09T14:56:12.530453997Z" level=info msg="StartContainer for \"a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3\" returns successfully" Jul 9 14:56:12.555479 containerd[1557]: time="2025-07-09T14:56:12.555415608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"a068e811d17a30429bc350f8527ce123e61f4073c0781d387fda097c25e424e5\" pid:5362 exited_at:{seconds:1752072972 nanos:554808546}" Jul 9 14:56:12.782447 kubelet[2817]: I0709 14:56:12.782219 2817 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 14:56:12.782447 kubelet[2817]: I0709 14:56:12.782349 2817 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 14:56:12.862829 containerd[1557]: time="2025-07-09T14:56:12.862745240Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"9f5dd5f13ab0165470d7c6e62761cd5620a727f4e202008c3993ef0e52497f56\" pid:5398 exit_status:1 exited_at:{seconds:1752072972 nanos:861104404}" Jul 9 14:56:13.371238 containerd[1557]: time="2025-07-09T14:56:13.371111630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"3121b997686560978e7a459ec2fb47d3febff12ed2a660b80ca986ba321a30ca\" pid:5423 exited_at:{seconds:1752072973 nanos:367356950}" Jul 9 14:56:13.428785 kubelet[2817]: I0709 14:56:13.427516 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ntrc4" podStartSLOduration=47.838680032 podStartE2EDuration="1m3.427483604s" podCreationTimestamp="2025-07-09 14:55:10 +0000 UTC" firstStartedPulling="2025-07-09 14:55:56.684449702 +0000 UTC m=+67.378393936" lastFinishedPulling="2025-07-09 14:56:12.273253264 +0000 UTC m=+82.967197508" observedRunningTime="2025-07-09 14:56:12.665485156 +0000 UTC m=+83.359429410" watchObservedRunningTime="2025-07-09 14:56:13.427483604 +0000 UTC m=+84.121427848" Jul 9 14:56:15.577120 update_engine[1535]: I20250709 14:56:15.577018 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 14:56:15.577696 update_engine[1535]: I20250709 14:56:15.577392 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 14:56:15.577773 update_engine[1535]: I20250709 14:56:15.577737 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 9 14:56:15.583892 update_engine[1535]: E20250709 14:56:15.583840 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 14:56:15.584056 update_engine[1535]: I20250709 14:56:15.583902 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 9 14:56:15.584056 update_engine[1535]: I20250709 14:56:15.583920 1535 omaha_request_action.cc:617] Omaha request response: Jul 9 14:56:15.584138 update_engine[1535]: E20250709 14:56:15.584059 1535 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 9 14:56:15.584322 update_engine[1535]: I20250709 14:56:15.584286 1535 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 9 14:56:15.584322 update_engine[1535]: I20250709 14:56:15.584300 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 9 14:56:15.584322 update_engine[1535]: I20250709 14:56:15.584311 1535 update_attempter.cc:306] Processing Done. Jul 9 14:56:15.584441 update_engine[1535]: E20250709 14:56:15.584398 1535 update_attempter.cc:619] Update failed. Jul 9 14:56:15.584441 update_engine[1535]: I20250709 14:56:15.584412 1535 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 9 14:56:15.584441 update_engine[1535]: I20250709 14:56:15.584419 1535 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 9 14:56:15.584441 update_engine[1535]: I20250709 14:56:15.584424 1535 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 9 14:56:15.584826 update_engine[1535]: I20250709 14:56:15.584583 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 9 14:56:15.584826 update_engine[1535]: I20250709 14:56:15.584651 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 9 14:56:15.584826 update_engine[1535]: I20250709 14:56:15.584659 1535 omaha_request_action.cc:272] Request: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: Jul 9 14:56:15.584826 update_engine[1535]: I20250709 14:56:15.584664 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 14:56:15.585298 update_engine[1535]: I20250709 14:56:15.585211 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 14:56:15.585497 update_engine[1535]: I20250709 14:56:15.585464 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 9 14:56:15.586259 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 9 14:56:15.590617 update_engine[1535]: E20250709 14:56:15.590565 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 14:56:15.590617 update_engine[1535]: I20250709 14:56:15.590616 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590626 1535 omaha_request_action.cc:617] Omaha request response: Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590632 1535 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590638 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590643 1535 update_attempter.cc:306] Processing Done. Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590649 1535 update_attempter.cc:310] Error event sent. 
Jul 9 14:56:15.590728 update_engine[1535]: I20250709 14:56:15.590667 1535 update_check_scheduler.cc:74] Next update check in 41m52s Jul 9 14:56:15.591467 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 9 14:56:16.046424 kubelet[2817]: I0709 14:56:16.043816 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:56:16.878969 kubelet[2817]: I0709 14:56:16.877929 2817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 14:56:19.420094 containerd[1557]: time="2025-07-09T14:56:19.419646946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"f212a7c982053e86cc89537360a2e2c64b083b7a03bb60a4b1cf55f71bab789e\" pid:5453 exited_at:{seconds:1752072979 nanos:417298661}" Jul 9 14:56:42.802331 containerd[1557]: time="2025-07-09T14:56:42.802031857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"c8ec499de8b271d14532b2f997c657f48ce0e27053968b6d842bcc38917a8258\" pid:5485 exited_at:{seconds:1752073002 nanos:799285298}" Jul 9 14:56:43.438519 containerd[1557]: time="2025-07-09T14:56:43.438451075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"abbe388d6569d64726d8acdef6006c75151e3419a46cb932582b3a20fe40d6e5\" pid:5510 exited_at:{seconds:1752073003 nanos:437263875}" Jul 9 14:56:49.397289 containerd[1557]: time="2025-07-09T14:56:49.396827286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"c0b766726cbc78e3e355116c9389f36f7e2ff75bfd427e90b5db7595968ff3cc\" pid:5532 exited_at:{seconds:1752073009 nanos:395732249}" Jul 9 14:57:08.439161 containerd[1557]: time="2025-07-09T14:57:08.438172734Z" level=info 
msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"866a85a505a101f83c6d60111fec0fcbbeaf294aa7f2ba185c21ecaf7e955f93\" pid:5565 exited_at:{seconds:1752073028 nanos:437311217}" Jul 9 14:57:12.528556 containerd[1557]: time="2025-07-09T14:57:12.528255753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"f656da4c18b6991772a2d094767aadde2bbf6e0a16b6bdec134ae10162c4dd0e\" pid:5588 exited_at:{seconds:1752073032 nanos:527621942}" Jul 9 14:57:12.777314 containerd[1557]: time="2025-07-09T14:57:12.777106542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"38a54180e3ac4bb5fe708cd18ea934852043e8a2addcc2a378b0bf7b2824877f\" pid:5610 exited_at:{seconds:1752073032 nanos:775356856}" Jul 9 14:57:13.631364 systemd[1]: Started sshd@9-172.24.4.253:22-172.24.4.1:37458.service - OpenSSH per-connection server daemon (172.24.4.1:37458). Jul 9 14:57:13.738104 containerd[1557]: time="2025-07-09T14:57:13.737980239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"0cec022678edac996c6d5435403f2b76b1887d6fa45922640cfe8f0c37a2078a\" pid:5633 exited_at:{seconds:1752073033 nanos:736648328}" Jul 9 14:57:14.939532 sshd[5648]: Accepted publickey for core from 172.24.4.1 port 37458 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:57:14.945221 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:57:14.966551 systemd-logind[1534]: New session 12 of user core. Jul 9 14:57:14.977300 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 9 14:57:15.799827 sshd[5651]: Connection closed by 172.24.4.1 port 37458 Jul 9 14:57:15.799630 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Jul 9 14:57:15.808794 systemd[1]: sshd@9-172.24.4.253:22-172.24.4.1:37458.service: Deactivated successfully. Jul 9 14:57:15.816502 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 14:57:15.825783 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Jul 9 14:57:15.830741 systemd-logind[1534]: Removed session 12. Jul 9 14:57:19.377636 containerd[1557]: time="2025-07-09T14:57:19.377545726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"2d7f809ea3e5288de115633b67af2722f211a2c332be52f1ce96f6925ba24d7b\" pid:5690 exited_at:{seconds:1752073039 nanos:376587436}" Jul 9 14:57:20.819217 systemd[1]: Started sshd@10-172.24.4.253:22-172.24.4.1:37464.service - OpenSSH per-connection server daemon (172.24.4.1:37464). Jul 9 14:57:22.136959 sshd[5700]: Accepted publickey for core from 172.24.4.1 port 37464 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:57:22.140355 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:57:22.155849 systemd-logind[1534]: New session 13 of user core. Jul 9 14:57:22.162226 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 14:57:22.928411 sshd[5703]: Connection closed by 172.24.4.1 port 37464 Jul 9 14:57:22.932078 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Jul 9 14:57:22.940410 systemd[1]: sshd@10-172.24.4.253:22-172.24.4.1:37464.service: Deactivated successfully. Jul 9 14:57:22.945363 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 14:57:22.948181 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Jul 9 14:57:22.951508 systemd-logind[1534]: Removed session 13. 
Jul 9 14:57:27.955155 systemd[1]: Started sshd@11-172.24.4.253:22-172.24.4.1:42650.service - OpenSSH per-connection server daemon (172.24.4.1:42650). Jul 9 14:57:29.425250 sshd[5725]: Accepted publickey for core from 172.24.4.1 port 42650 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:57:29.428434 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:57:29.443091 systemd-logind[1534]: New session 14 of user core. Jul 9 14:57:29.455347 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 14:57:30.222166 sshd[5728]: Connection closed by 172.24.4.1 port 42650 Jul 9 14:57:30.222619 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jul 9 14:57:30.227883 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Jul 9 14:57:30.228525 systemd[1]: sshd@11-172.24.4.253:22-172.24.4.1:42650.service: Deactivated successfully. Jul 9 14:57:30.234270 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 14:57:30.240700 systemd-logind[1534]: Removed session 14. Jul 9 14:57:35.248828 systemd[1]: Started sshd@12-172.24.4.253:22-172.24.4.1:36414.service - OpenSSH per-connection server daemon (172.24.4.1:36414). Jul 9 14:57:36.681449 sshd[5741]: Accepted publickey for core from 172.24.4.1 port 36414 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:57:36.685519 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:57:36.706087 systemd-logind[1534]: New session 15 of user core. Jul 9 14:57:36.723431 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 14:57:37.595667 sshd[5744]: Connection closed by 172.24.4.1 port 36414 Jul 9 14:57:37.597325 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Jul 9 14:57:37.613722 systemd[1]: sshd@12-172.24.4.253:22-172.24.4.1:36414.service: Deactivated successfully. 
Jul 9 14:57:37.619731 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 14:57:37.625440 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
Jul 9 14:57:37.633290 systemd[1]: Started sshd@13-172.24.4.253:22-172.24.4.1:36416.service - OpenSSH per-connection server daemon (172.24.4.1:36416).
Jul 9 14:57:37.641325 systemd-logind[1534]: Removed session 15.
Jul 9 14:57:39.114154 sshd[5757]: Accepted publickey for core from 172.24.4.1 port 36416 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:57:39.118235 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:57:39.131600 systemd-logind[1534]: New session 16 of user core.
Jul 9 14:57:39.152319 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 14:57:40.310968 sshd[5760]: Connection closed by 172.24.4.1 port 36416
Jul 9 14:57:40.314773 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Jul 9 14:57:40.343561 systemd[1]: Started sshd@14-172.24.4.253:22-172.24.4.1:36432.service - OpenSSH per-connection server daemon (172.24.4.1:36432).
Jul 9 14:57:40.344821 systemd[1]: sshd@13-172.24.4.253:22-172.24.4.1:36416.service: Deactivated successfully.
Jul 9 14:57:40.357665 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 14:57:40.364582 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
Jul 9 14:57:40.373220 systemd-logind[1534]: Removed session 16.
Jul 9 14:57:41.694758 sshd[5767]: Accepted publickey for core from 172.24.4.1 port 36432 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:57:41.697397 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:57:41.709776 systemd-logind[1534]: New session 17 of user core.
Jul 9 14:57:41.723273 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 14:57:42.610687 sshd[5773]: Connection closed by 172.24.4.1 port 36432
Jul 9 14:57:42.612377 sshd-session[5767]: pam_unix(sshd:session): session closed for user core
Jul 9 14:57:42.622627 systemd[1]: sshd@14-172.24.4.253:22-172.24.4.1:36432.service: Deactivated successfully.
Jul 9 14:57:42.633905 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 14:57:42.640293 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
Jul 9 14:57:42.644644 systemd-logind[1534]: Removed session 17.
Jul 9 14:57:42.753591 containerd[1557]: time="2025-07-09T14:57:42.753231980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"30bf6da35583f5fc30fa61a37226293301161deefaf9399eb1e1a8ec3942946f\" pid:5796 exited_at:{seconds:1752073062 nanos:752179509}"
Jul 9 14:57:43.463065 containerd[1557]: time="2025-07-09T14:57:43.462737386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"6c00c170840cfe900caae5f6f4c341ccf9d6046875a7bc6af6612edb6c324ec8\" pid:5820 exited_at:{seconds:1752073063 nanos:460035376}"
Jul 9 14:57:47.655307 systemd[1]: Started sshd@15-172.24.4.253:22-172.24.4.1:57136.service - OpenSSH per-connection server daemon (172.24.4.1:57136).
Jul 9 14:57:48.810419 sshd[5838]: Accepted publickey for core from 172.24.4.1 port 57136 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:57:48.812879 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:57:48.825089 systemd-logind[1534]: New session 18 of user core.
Jul 9 14:57:48.832263 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 14:57:49.382965 containerd[1557]: time="2025-07-09T14:57:49.382897792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"22558f0c58c2372c7f1fdf7e7f7ce0e481142d06b0dfbca6bcff96d73a346b2d\" pid:5860 exited_at:{seconds:1752073069 nanos:382617648}"
Jul 9 14:57:49.595455 sshd[5841]: Connection closed by 172.24.4.1 port 57136
Jul 9 14:57:49.597167 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
Jul 9 14:57:49.607697 systemd[1]: sshd@15-172.24.4.253:22-172.24.4.1:57136.service: Deactivated successfully.
Jul 9 14:57:49.615468 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 14:57:49.624195 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit.
Jul 9 14:57:49.627125 systemd-logind[1534]: Removed session 18.
Jul 9 14:57:54.628731 systemd[1]: Started sshd@16-172.24.4.253:22-172.24.4.1:41854.service - OpenSSH per-connection server daemon (172.24.4.1:41854).
Jul 9 14:57:55.857343 sshd[5876]: Accepted publickey for core from 172.24.4.1 port 41854 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:57:55.860450 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:57:55.872325 systemd-logind[1534]: New session 19 of user core.
Jul 9 14:57:55.880353 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 14:57:56.872576 sshd[5881]: Connection closed by 172.24.4.1 port 41854
Jul 9 14:57:56.874791 sshd-session[5876]: pam_unix(sshd:session): session closed for user core
Jul 9 14:57:56.884363 systemd[1]: sshd@16-172.24.4.253:22-172.24.4.1:41854.service: Deactivated successfully.
Jul 9 14:57:56.896307 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 14:57:56.900535 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Jul 9 14:57:56.909598 systemd-logind[1534]: Removed session 19.
Jul 9 14:58:01.904765 systemd[1]: Started sshd@17-172.24.4.253:22-172.24.4.1:41862.service - OpenSSH per-connection server daemon (172.24.4.1:41862).
Jul 9 14:58:03.090646 sshd[5896]: Accepted publickey for core from 172.24.4.1 port 41862 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:03.094429 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:03.105338 systemd-logind[1534]: New session 20 of user core.
Jul 9 14:58:03.119348 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 14:58:03.975320 sshd[5899]: Connection closed by 172.24.4.1 port 41862
Jul 9 14:58:03.976460 sshd-session[5896]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:03.990382 systemd[1]: sshd@17-172.24.4.253:22-172.24.4.1:41862.service: Deactivated successfully.
Jul 9 14:58:03.997878 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 14:58:04.001918 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Jul 9 14:58:04.006126 systemd-logind[1534]: Removed session 20.
Jul 9 14:58:08.427811 containerd[1557]: time="2025-07-09T14:58:08.427748015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"b4fc517ef3a72113537d3a0bf346acadada3d0d3910e72ca4d37a1fcc087f29d\" pid:5923 exited_at:{seconds:1752073088 nanos:426362960}"
Jul 9 14:58:09.001026 systemd[1]: Started sshd@18-172.24.4.253:22-172.24.4.1:46498.service - OpenSSH per-connection server daemon (172.24.4.1:46498).
Jul 9 14:58:10.301877 sshd[5934]: Accepted publickey for core from 172.24.4.1 port 46498 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:10.304577 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:10.319047 systemd-logind[1534]: New session 21 of user core.
Jul 9 14:58:10.331277 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 14:58:11.045995 sshd[5937]: Connection closed by 172.24.4.1 port 46498
Jul 9 14:58:11.047354 sshd-session[5934]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:11.066493 systemd[1]: sshd@18-172.24.4.253:22-172.24.4.1:46498.service: Deactivated successfully.
Jul 9 14:58:11.074143 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 14:58:11.079516 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Jul 9 14:58:11.084114 systemd-logind[1534]: Removed session 21.
Jul 9 14:58:11.089075 systemd[1]: Started sshd@19-172.24.4.253:22-172.24.4.1:46502.service - OpenSSH per-connection server daemon (172.24.4.1:46502).
Jul 9 14:58:12.369062 sshd[5949]: Accepted publickey for core from 172.24.4.1 port 46502 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:12.376628 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:12.398094 systemd-logind[1534]: New session 22 of user core.
Jul 9 14:58:12.407448 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 14:58:12.494201 containerd[1557]: time="2025-07-09T14:58:12.494128012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"de71a01396e361e3c5f479d2bd5d3b5f55d93490a459969bfd11f45fdaf5aa26\" pid:5967 exited_at:{seconds:1752073092 nanos:493823742}"
Jul 9 14:58:12.743803 containerd[1557]: time="2025-07-09T14:58:12.743479211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"717ee623411dc03955b766ff04a05f1deddb013b4690fce3dcd51eb1f6b1bbcc\" pid:5988 exited_at:{seconds:1752073092 nanos:742925183}"
Jul 9 14:58:13.397964 containerd[1557]: time="2025-07-09T14:58:13.397580301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"5f99e03ae649e68fd887faeb19a96b834ecaf8b343fb82c3a7e031fb52251d6f\" pid:6016 exited_at:{seconds:1752073093 nanos:396011651}"
Jul 9 14:58:13.614608 sshd[5952]: Connection closed by 172.24.4.1 port 46502
Jul 9 14:58:13.615734 sshd-session[5949]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:13.643128 systemd[1]: sshd@19-172.24.4.253:22-172.24.4.1:46502.service: Deactivated successfully.
Jul 9 14:58:13.651223 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 14:58:13.657051 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit.
Jul 9 14:58:13.670205 systemd[1]: Started sshd@20-172.24.4.253:22-172.24.4.1:55184.service - OpenSSH per-connection server daemon (172.24.4.1:55184).
Jul 9 14:58:13.675085 systemd-logind[1534]: Removed session 22.
Jul 9 14:58:15.178818 sshd[6031]: Accepted publickey for core from 172.24.4.1 port 55184 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:15.181435 sshd-session[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:15.192176 systemd-logind[1534]: New session 23 of user core.
Jul 9 14:58:15.197194 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 9 14:58:17.515254 sshd[6034]: Connection closed by 172.24.4.1 port 55184
Jul 9 14:58:17.516662 sshd-session[6031]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:17.537458 systemd[1]: sshd@20-172.24.4.253:22-172.24.4.1:55184.service: Deactivated successfully.
Jul 9 14:58:17.543515 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 14:58:17.547290 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit.
Jul 9 14:58:17.555551 systemd[1]: Started sshd@21-172.24.4.253:22-172.24.4.1:55196.service - OpenSSH per-connection server daemon (172.24.4.1:55196).
Jul 9 14:58:17.561008 systemd-logind[1534]: Removed session 23.
Jul 9 14:58:20.137784 containerd[1557]: time="2025-07-09T14:58:20.112401551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"4e8737f7797e04ee77c72f392a073cc7e6dbb6d08b1c6684d56a39c886f48cb2\" pid:6067 exited_at:{seconds:1752073100 nanos:110441957}"
Jul 9 14:58:29.153071 systemd[1]: cri-containerd-0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439.scope: Deactivated successfully.
Jul 9 14:58:29.155138 systemd[1]: cri-containerd-0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439.scope: Consumed 6.668s CPU time, 62.8M memory peak, 128K read from disk.
Jul 9 14:58:29.251644 containerd[1557]: time="2025-07-09T14:58:29.243794876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" id:\"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" pid:2657 exit_status:1 exited_at:{seconds:1752073109 nanos:162918291}"
Jul 9 14:58:29.254199 containerd[1557]: time="2025-07-09T14:58:29.253738890Z" level=info msg="received exit event container_id:\"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" id:\"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" pid:2657 exit_status:1 exited_at:{seconds:1752073109 nanos:162918291}"
Jul 9 14:58:29.282445 systemd[1]: cri-containerd-dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49.scope: Deactivated successfully.
Jul 9 14:58:29.283454 systemd[1]: cri-containerd-dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49.scope: Consumed 20.996s CPU time, 123.1M memory peak, 1.3M read from disk.
Jul 9 14:58:29.570296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49-rootfs.mount: Deactivated successfully.
Jul 9 14:58:31.925522 containerd[1557]: time="2025-07-09T14:58:29.344542576Z" level=info msg="received exit event container_id:\"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" id:\"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" pid:3138 exit_status:1 exited_at:{seconds:1752073109 nanos:343829069}"
Jul 9 14:58:31.925522 containerd[1557]: time="2025-07-09T14:58:29.346525624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" id:\"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" pid:3138 exit_status:1 exited_at:{seconds:1752073109 nanos:343829069}"
Jul 9 14:58:29.591361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439-rootfs.mount: Deactivated successfully.
Jul 9 14:58:31.980978 kubelet[2817]: E0709 14:58:31.979604 2817 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 9 14:58:32.023326 kubelet[2817]: E0709 14:58:32.023261 2817 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-9999-9-100-3d8d1010bc.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 9 14:58:32.034187 systemd[1]: cri-containerd-543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0.scope: Deactivated successfully.
Jul 9 14:58:32.034845 systemd[1]: cri-containerd-543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0.scope: Consumed 4.516s CPU time, 23.3M memory peak.
Jul 9 14:58:32.042557 containerd[1557]: time="2025-07-09T14:58:32.042462234Z" level=info msg="received exit event container_id:\"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" id:\"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" pid:2645 exit_status:1 exited_at:{seconds:1752073112 nanos:41227649}"
Jul 9 14:58:32.042796 containerd[1557]: time="2025-07-09T14:58:32.042466953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" id:\"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" pid:2645 exit_status:1 exited_at:{seconds:1752073112 nanos:41227649}"
Jul 9 14:58:32.115287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0-rootfs.mount: Deactivated successfully.
Jul 9 14:58:32.950137 sshd[6052]: Accepted publickey for core from 172.24.4.1 port 55196 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:47.502687 kubelet[2817]: E0709 14:58:47.502463 2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-3d8d1010bc.novalocal?timeout=10s\": context deadline exceeded" interval="200ms"
Jul 9 14:58:47.596564 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:48.377485 containerd[1557]: time="2025-07-09T14:58:47.838285984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"d22638b41bf7de7ba1cf5e8acdac390d4d361bc7fb4367bcb6ae228834e4a926\" pid:6159 exited_at:{seconds:1752073127 nanos:836079076}"
Jul 9 14:58:48.377485 containerd[1557]: time="2025-07-09T14:58:47.841676453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"076904f93faf4c05f761ad17c3d8aa2b26f76a452877400220356faa9d93ea62\" pid:6161 exited_at:{seconds:1752073127 nanos:841237219}"
Jul 9 14:58:47.618863 systemd-logind[1534]: New session 24 of user core.
Jul 9 14:58:48.379173 kubelet[2817]: E0709 14:58:39.030569 2817 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal.18509d3660a052ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal,UID:6256d6b0ad5b6fd7dde2de9cddd9c6d5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-3d8d1010bc.novalocal,},FirstTimestamp:2025-07-09 14:58:31.922037482 +0000 UTC m=+222.615981716,LastTimestamp:2025-07-09 14:58:31.922037482 +0000 UTC m=+222.615981716,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-3d8d1010bc.novalocal,}"
Jul 9 14:58:48.379173 kubelet[2817]: I0709 14:58:47.527101 2817 status_manager.go:890] "Failed to get status for pod" podUID="19e668e1706749a4125dc683e501ff82" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" err="etcdserver: request timed out"
Jul 9 14:58:47.625290 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 14:58:48.921932 kubelet[2817]: I0709 14:58:48.921818 2817 scope.go:117] "RemoveContainer" containerID="543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0"
Jul 9 14:58:48.926887 kubelet[2817]: I0709 14:58:48.926747 2817 scope.go:117] "RemoveContainer" containerID="dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49"
Jul 9 14:58:49.148580 kubelet[2817]: I0709 14:58:49.148261 2817 scope.go:117] "RemoveContainer" containerID="0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439"
Jul 9 14:58:49.155478 containerd[1557]: time="2025-07-09T14:58:49.155361498Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 9 14:58:49.170872 containerd[1557]: time="2025-07-09T14:58:49.169629005Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 9 14:58:49.175540 containerd[1557]: time="2025-07-09T14:58:49.175307976Z" level=info msg="CreateContainer within sandbox \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 9 14:58:49.395771 containerd[1557]: time="2025-07-09T14:58:49.395703707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"a6a43e55ec4329485e02b9b2ec795659d07984c11d05d448e5c8e51272be7d1e\" pid:6198 exit_status:1 exited_at:{seconds:1752073129 nanos:395067203}"
Jul 9 14:58:50.378369 containerd[1557]: time="2025-07-09T14:58:50.378280396Z" level=info msg="Container e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:58:50.490531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451070032.mount: Deactivated successfully.
Jul 9 14:58:50.501551 containerd[1557]: time="2025-07-09T14:58:50.501478355Z" level=info msg="Container edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:58:50.560667 containerd[1557]: time="2025-07-09T14:58:50.560535653Z" level=info msg="Container 0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:58:50.714492 sshd[6149]: Connection closed by 172.24.4.1 port 55196
Jul 9 14:58:50.718808 sshd-session[6052]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:50.738829 systemd[1]: sshd@21-172.24.4.253:22-172.24.4.1:55196.service: Deactivated successfully.
Jul 9 14:58:50.746673 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 14:58:50.753292 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Jul 9 14:58:50.764415 systemd[1]: Started sshd@22-172.24.4.253:22-172.24.4.1:52494.service - OpenSSH per-connection server daemon (172.24.4.1:52494).
Jul 9 14:58:50.767033 systemd-logind[1534]: Removed session 24.
Jul 9 14:58:50.786206 containerd[1557]: time="2025-07-09T14:58:50.786162961Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\""
Jul 9 14:58:50.789229 containerd[1557]: time="2025-07-09T14:58:50.788415045Z" level=info msg="StartContainer for \"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\""
Jul 9 14:58:50.807792 containerd[1557]: time="2025-07-09T14:58:50.807736059Z" level=info msg="connecting to shim e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb" address="unix:///run/containerd/s/35b466d41ab2056d14d96206e9c974340656d4f96ce6a3a4523a8686a0167281" protocol=ttrpc version=3
Jul 9 14:58:50.841237 systemd[1]: Started cri-containerd-e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb.scope - libcontainer container e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb.
Jul 9 14:58:50.870986 containerd[1557]: time="2025-07-09T14:58:50.869906997Z" level=info msg="CreateContainer within sandbox \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\""
Jul 9 14:58:50.870986 containerd[1557]: time="2025-07-09T14:58:50.870762501Z" level=info msg="StartContainer for \"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\""
Jul 9 14:58:50.876957 containerd[1557]: time="2025-07-09T14:58:50.876769947Z" level=info msg="connecting to shim edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc" address="unix:///run/containerd/s/7ea270e13a80a5c53576412116ca8fc6819e59f2f51bea4f26261444e457a472" protocol=ttrpc version=3
Jul 9 14:58:50.902734 containerd[1557]: time="2025-07-09T14:58:50.902496226Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\""
Jul 9 14:58:50.903953 containerd[1557]: time="2025-07-09T14:58:50.903832401Z" level=info msg="StartContainer for \"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\""
Jul 9 14:58:50.908415 containerd[1557]: time="2025-07-09T14:58:50.908309668Z" level=info msg="connecting to shim 0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c" address="unix:///run/containerd/s/971b3f1a29cba9adb94eb72d594bd23a244d369026fce099d4716c79f3abd835" protocol=ttrpc version=3
Jul 9 14:58:50.951243 systemd[1]: Started cri-containerd-edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc.scope - libcontainer container edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc.
Jul 9 14:58:50.989995 containerd[1557]: time="2025-07-09T14:58:50.989417840Z" level=info msg="StartContainer for \"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" returns successfully"
Jul 9 14:58:50.996310 systemd[1]: Started cri-containerd-0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c.scope - libcontainer container 0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c.
Jul 9 14:58:51.216490 containerd[1557]: time="2025-07-09T14:58:51.216407238Z" level=info msg="StartContainer for \"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" returns successfully"
Jul 9 14:58:51.219582 containerd[1557]: time="2025-07-09T14:58:51.219458591Z" level=info msg="StartContainer for \"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" returns successfully"
Jul 9 14:58:51.385837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240033968.mount: Deactivated successfully.
Jul 9 14:58:51.867958 sshd[6213]: Accepted publickey for core from 172.24.4.1 port 52494 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:58:51.870166 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:58:51.878414 systemd-logind[1534]: New session 25 of user core.
Jul 9 14:58:51.884650 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 14:58:52.889894 sshd[6314]: Connection closed by 172.24.4.1 port 52494
Jul 9 14:58:52.889212 sshd-session[6213]: pam_unix(sshd:session): session closed for user core
Jul 9 14:58:52.898868 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Jul 9 14:58:52.903117 systemd[1]: sshd@22-172.24.4.253:22-172.24.4.1:52494.service: Deactivated successfully.
Jul 9 14:58:52.908876 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 14:58:52.918575 systemd-logind[1534]: Removed session 25.
Jul 9 14:58:57.922435 systemd[1]: Started sshd@23-172.24.4.253:22-172.24.4.1:51206.service - OpenSSH per-connection server daemon (172.24.4.1:51206).
Jul 9 14:59:08.431726 containerd[1557]: time="2025-07-09T14:59:08.431549315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"934b3e4797be0660dcf8a77d3c9f427dc94dd775ead9e19f896898b5daaabba4\" pid:6364 exited_at:{seconds:1752073148 nanos:430500538}"
Jul 9 14:59:11.798244 sshd[6349]: Accepted publickey for core from 172.24.4.1 port 51206 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:59:11.801443 sshd-session[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:59:11.822981 systemd-logind[1534]: New session 26 of user core.
Jul 9 14:59:11.836610 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 14:59:12.518608 containerd[1557]: time="2025-07-09T14:59:12.518330607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"2f34305e5e1be69a36a6eb60a38d3dc086c241cc96c55d965d4e7791182ff199\" pid:6389 exit_status:1 exited_at:{seconds:1752073152 nanos:518064388}"
Jul 9 14:59:12.730270 containerd[1557]: time="2025-07-09T14:59:12.730214575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"9b11b7202aaa73eb2ef6eea398e914c30bc1e3be466d5afc24ea116fc6eb2141\" pid:6410 exited_at:{seconds:1752073152 nanos:729551291}"
Jul 9 14:59:16.726391 containerd[1557]: time="2025-07-09T14:59:16.726328604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"966b0cfab6477f882d265eab124816a5f4cb560d2cdeb5188110e5f177e1ab2b\" pid:6433 exited_at:{seconds:1752073156 nanos:725619234}"
Jul 9 14:59:16.935339 sshd[6376]: Connection closed by 172.24.4.1 port 51206
Jul 9 14:59:16.940234 sshd-session[6349]: pam_unix(sshd:session): session closed for user core
Jul 9 14:59:16.960758 systemd[1]: sshd@23-172.24.4.253:22-172.24.4.1:51206.service: Deactivated successfully.
Jul 9 14:59:16.970621 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 14:59:16.974000 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Jul 9 14:59:16.978135 systemd-logind[1534]: Removed session 26.
Jul 9 14:59:19.701497 containerd[1557]: time="2025-07-09T14:59:19.701189378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"12f37555c519fefd8fe941932d0bb478bc39af27c673fb65f4045044e2a4c904\" pid:6470 exited_at:{seconds:1752073159 nanos:699162746}"
Jul 9 14:59:28.822075 kubelet[2817]: E0709 14:59:28.821982 2817 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 9 14:59:28.973836 systemd[1]: cri-containerd-0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c.scope: Deactivated successfully.
Jul 9 14:59:42.331359 kubelet[2817]: E0709 14:59:32.810407 2817 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal.18509d3660a052ea kube-system 1585 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal,UID:6256d6b0ad5b6fd7dde2de9cddd9c6d5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-3d8d1010bc.novalocal,},FirstTimestamp:2025-07-09 14:58:31 +0000 UTC,LastTimestamp:2025-07-09 14:59:23.896915255 +0000 UTC m=+274.590859569,Count:11,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-3d8d1010bc.novalocal,}"
Jul 9 14:59:42.331359 kubelet[2817]: E0709 14:59:35.832674 2817 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:28.983795564Z" level=info msg="received exit event container_id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" pid:6277 exit_status:1 exited_at:{seconds:1752073168 nanos:982489293}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:28.987397121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" pid:6277 exit_status:1 exited_at:{seconds:1752073168 nanos:982489293}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:29.345262547Z" level=info msg="received exit event container_id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" pid:6228 exit_status:1 exited_at:{seconds:1752073169 nanos:344663744}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:29.346616708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" pid:6228 exit_status:1 exited_at:{seconds:1752073169 nanos:344663744}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:33.820827516Z" level=info msg="received exit event container_id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" pid:6264 exit_status:1 exited_at:{seconds:1752073173 nanos:819859461}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:33.821241212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" pid:6264 exit_status:1 exited_at:{seconds:1752073173 nanos:819859461}"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:42.319875947Z" level=error msg="failed to handle container TaskExit event container_id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" pid:6277 exit_status:1 exited_at:{seconds:1752073168 nanos:982489293}" error="failed to stop container: failed to delete task: context deadline exceeded"
Jul 9 14:59:42.334807 containerd[1557]: time="2025-07-09T14:59:42.319833417Z" level=error msg="failed to handle container TaskExit event container_id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" pid:6228 exit_status:1 exited_at:{seconds:1752073169 nanos:344663744}" error="failed to stop container: failed to delete task: context deadline exceeded"
Jul 9 14:59:28.978367 systemd[1]: cri-containerd-0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c.scope: Consumed 2.544s CPU time, 31.5M memory peak, 1.4M read from disk.
Jul 9 14:59:29.340679 systemd[1]: cri-containerd-e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb.scope: Deactivated successfully.
Jul 9 14:59:29.341418 systemd[1]: cri-containerd-e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb.scope: Consumed 1.354s CPU time, 64.4M memory peak, 1.1M read from disk.
Jul 9 14:59:33.813847 systemd[1]: cri-containerd-edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc.scope: Deactivated successfully.
Jul 9 14:59:33.816326 systemd[1]: cri-containerd-edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc.scope: Consumed 2.502s CPU time, 19.9M memory peak, 1.4M read from disk.
Jul 9 14:59:35.944820 systemd[1]: Started sshd@24-172.24.4.253:22-172.24.4.1:43936.service - OpenSSH per-connection server daemon (172.24.4.1:43936).
Jul 9 14:59:42.348528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c-rootfs.mount: Deactivated successfully.
Jul 9 14:59:42.353400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb-rootfs.mount: Deactivated successfully.
Jul 9 14:59:42.405427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc-rootfs.mount: Deactivated successfully.
Jul 9 14:59:42.845539 kubelet[2817]: I0709 14:59:42.845194 2817 status_manager.go:890] "Failed to get status for pod" podUID="6256d6b0ad5b6fd7dde2de9cddd9c6d5" pod="kube-system/kube-apiserver-ci-9999-9-100-3d8d1010bc.novalocal" err="etcdserver: request timed out" Jul 9 14:59:42.884134 kubelet[2817]: E0709 14:59:42.872222 2817 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 9 14:59:43.040103 containerd[1557]: time="2025-07-09T14:59:43.040005221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"094e51536f1044cba455bbe42064f21167af86440086f254fbbe05bc7eaa9ab5\" pid:6523 exited_at:{seconds:1752073183 nanos:39322820}" Jul 9 14:59:43.104373 kubelet[2817]: E0709 14:59:43.103604 2817 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-9999-9-100-3d8d1010bc.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 9 14:59:43.684356 containerd[1557]: time="2025-07-09T14:59:43.684273201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"8a7fcf5b246ff6ca9d41dd97c9d55d15e53afd5a87d5a294e430c37a52718335\" pid:6544 exited_at:{seconds:1752073183 nanos:683679376}" Jul 9 14:59:43.710690 containerd[1557]: time="2025-07-09T14:59:43.710611246Z" level=info msg="TaskExit event container_id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" id:\"0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c\" pid:6277 exit_status:1 exited_at:{seconds:1752073168 nanos:982489293}" Jul 9 14:59:43.805282 sshd[6501]: Accepted publickey for core from 172.24.4.1 port 43936 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:59:43.809508 sshd-session[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Jul 9 14:59:43.812579 containerd[1557]: time="2025-07-09T14:59:43.812071987Z" level=warning msg="container event discarded" container=6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710 type=CONTAINER_CREATED_EVENT Jul 9 14:59:43.821602 containerd[1557]: time="2025-07-09T14:59:43.821419725Z" level=error msg="failed to handle container TaskExit event container_id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" pid:6264 exit_status:1 exited_at:{seconds:1752073173 nanos:819859461}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 9 14:59:43.851120 containerd[1557]: time="2025-07-09T14:59:43.824931314Z" level=warning msg="container event discarded" container=6be27c52e96e3c7f87a4eda61e82add1f5a0c31ca2a8c00cd42ae2ccd77ba710 type=CONTAINER_STARTED_EVENT Jul 9 14:59:43.866004 systemd-logind[1534]: New session 27 of user core. Jul 9 14:59:43.872539 containerd[1557]: time="2025-07-09T14:59:43.872311640Z" level=warning msg="container event discarded" container=381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f type=CONTAINER_CREATED_EVENT Jul 9 14:59:43.873015 containerd[1557]: time="2025-07-09T14:59:43.872772594Z" level=warning msg="container event discarded" container=381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f type=CONTAINER_STARTED_EVENT Jul 9 14:59:43.873015 containerd[1557]: time="2025-07-09T14:59:43.872818340Z" level=warning msg="container event discarded" container=6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26 type=CONTAINER_CREATED_EVENT Jul 9 14:59:43.873015 containerd[1557]: time="2025-07-09T14:59:43.872839941Z" level=warning msg="container event discarded" container=6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26 type=CONTAINER_STARTED_EVENT Jul 9 14:59:43.873015 containerd[1557]: time="2025-07-09T14:59:43.872888181Z" level=warning msg="container event 
discarded" container=6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c type=CONTAINER_CREATED_EVENT Jul 9 14:59:43.873473 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 9 14:59:43.888715 containerd[1557]: time="2025-07-09T14:59:43.888635556Z" level=warning msg="container event discarded" container=543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0 type=CONTAINER_CREATED_EVENT Jul 9 14:59:43.895711 containerd[1557]: time="2025-07-09T14:59:43.895568183Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Jul 9 14:59:43.896177 containerd[1557]: time="2025-07-09T14:59:43.896112365Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Jul 9 14:59:43.896426 containerd[1557]: time="2025-07-09T14:59:43.896357735Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Jul 9 14:59:43.901142 containerd[1557]: time="2025-07-09T14:59:43.901078953Z" level=warning msg="container event discarded" container=0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439 type=CONTAINER_CREATED_EVENT Jul 9 14:59:44.050892 containerd[1557]: time="2025-07-09T14:59:44.003995866Z" level=warning msg="container event discarded" container=6a0e7e8118789c6a17ccff4a86f9653ad5fd98fe4ebd61b0907e52ea73f0037c type=CONTAINER_STARTED_EVENT Jul 9 14:59:44.078413 containerd[1557]: time="2025-07-09T14:59:44.078336926Z" level=warning msg="container event discarded" container=0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439 type=CONTAINER_STARTED_EVENT Jul 9 14:59:44.096116 containerd[1557]: time="2025-07-09T14:59:44.096037817Z" level=warning msg="container event discarded" container=543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0 type=CONTAINER_STARTED_EVENT Jul 9 14:59:44.098562 containerd[1557]: time="2025-07-09T14:59:44.098453880Z" level=info msg="TaskExit event container_id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" 
id:\"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" pid:6228 exit_status:1 exited_at:{seconds:1752073169 nanos:344663744}" Jul 9 14:59:44.830975 containerd[1557]: time="2025-07-09T14:59:44.830701975Z" level=info msg="TaskExit event container_id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" id:\"edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\" pid:6264 exit_status:1 exited_at:{seconds:1752073173 nanos:819859461}" Jul 9 14:59:44.837981 containerd[1557]: time="2025-07-09T14:59:44.837874332Z" level=info msg="Ensure that container edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc in task-service has been cleanup successfully" Jul 9 14:59:44.977748 kubelet[2817]: I0709 14:59:44.977562 2817 scope.go:117] "RemoveContainer" containerID="543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0" Jul 9 14:59:44.984184 kubelet[2817]: I0709 14:59:44.983774 2817 scope.go:117] "RemoveContainer" containerID="edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc" Jul 9 14:59:45.007165 containerd[1557]: time="2025-07-09T14:59:45.006187995Z" level=info msg="RemoveContainer for \"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\"" Jul 9 14:59:45.010889 containerd[1557]: time="2025-07-09T14:59:45.009342704Z" level=info msg="CreateContainer within sandbox \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jul 9 14:59:45.011756 kubelet[2817]: I0709 14:59:45.011684 2817 scope.go:117] "RemoveContainer" containerID="e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb" Jul 9 14:59:45.018985 containerd[1557]: time="2025-07-09T14:59:45.018897571Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jul 9 14:59:45.026724 kubelet[2817]: I0709 14:59:45.026613 2817 
scope.go:117] "RemoveContainer" containerID="0c71263b5e0be8dfd5766a603c377fe5d415d936e1584d6a5cc72a3b16fc171c" Jul 9 14:59:45.031968 containerd[1557]: time="2025-07-09T14:59:45.031085278Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jul 9 14:59:45.166142 sshd[6556]: Connection closed by 172.24.4.1 port 43936 Jul 9 14:59:45.166622 sshd-session[6501]: pam_unix(sshd:session): session closed for user core Jul 9 14:59:45.186047 systemd[1]: sshd@24-172.24.4.253:22-172.24.4.1:43936.service: Deactivated successfully. Jul 9 14:59:45.201801 systemd[1]: session-27.scope: Deactivated successfully. Jul 9 14:59:45.206836 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit. Jul 9 14:59:45.212226 systemd-logind[1534]: Removed session 27. Jul 9 14:59:45.705104 containerd[1557]: time="2025-07-09T14:59:45.703036528Z" level=info msg="RemoveContainer for \"543b5d3b60f9e23f0e3db53b0f9ee358399adbd7b8e0254cc58d1b83adfd66f0\" returns successfully" Jul 9 14:59:45.707773 kubelet[2817]: I0709 14:59:45.707722 2817 scope.go:117] "RemoveContainer" containerID="dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49" Jul 9 14:59:45.715761 containerd[1557]: time="2025-07-09T14:59:45.714813826Z" level=info msg="RemoveContainer for \"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\"" Jul 9 14:59:45.839482 containerd[1557]: time="2025-07-09T14:59:45.839373257Z" level=info msg="Container e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:59:46.604054 containerd[1557]: time="2025-07-09T14:59:46.603987160Z" level=info msg="Container 9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:59:46.610816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192108169.mount: 
Deactivated successfully. Jul 9 14:59:46.636930 containerd[1557]: time="2025-07-09T14:59:46.636733890Z" level=info msg="RemoveContainer for \"dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49\" returns successfully" Jul 9 14:59:46.638711 kubelet[2817]: I0709 14:59:46.638617 2817 scope.go:117] "RemoveContainer" containerID="0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439" Jul 9 14:59:46.648856 containerd[1557]: time="2025-07-09T14:59:46.648761547Z" level=info msg="RemoveContainer for \"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\"" Jul 9 14:59:46.672656 containerd[1557]: time="2025-07-09T14:59:46.672506548Z" level=info msg="Container 23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:59:46.812094 containerd[1557]: time="2025-07-09T14:59:46.811981074Z" level=info msg="CreateContainer within sandbox \"381c6cdd3c485038b3987e8541bfefaff528485369f23e41e4750fe08ba07a9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e\"" Jul 9 14:59:46.813437 containerd[1557]: time="2025-07-09T14:59:46.813362566Z" level=info msg="StartContainer for \"e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e\"" Jul 9 14:59:46.820570 containerd[1557]: time="2025-07-09T14:59:46.820343605Z" level=info msg="connecting to shim e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e" address="unix:///run/containerd/s/7ea270e13a80a5c53576412116ca8fc6819e59f2f51bea4f26261444e457a472" protocol=ttrpc version=3 Jul 9 14:59:46.907333 systemd[1]: Started cri-containerd-e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e.scope - libcontainer container e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e. 
Jul 9 14:59:47.060685 containerd[1557]: time="2025-07-09T14:59:47.060183160Z" level=info msg="RemoveContainer for \"0fd23b9d426cead4abb72733841d852addd332eb8f40fbde8dc04300e7d1f439\" returns successfully" Jul 9 14:59:47.064952 containerd[1557]: time="2025-07-09T14:59:47.064879832Z" level=info msg="StartContainer for \"e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e\" returns successfully" Jul 9 14:59:47.138342 containerd[1557]: time="2025-07-09T14:59:47.137852907Z" level=info msg="CreateContainer within sandbox \"6f1dd5e9273a209dfb86492befc4c6ea815940d85789bcd50fc2b5af3f51cf26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010\"" Jul 9 14:59:47.169837 containerd[1557]: time="2025-07-09T14:59:47.143036032Z" level=info msg="StartContainer for \"23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010\"" Jul 9 14:59:47.180102 containerd[1557]: time="2025-07-09T14:59:47.179777167Z" level=info msg="connecting to shim 23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010" address="unix:///run/containerd/s/971b3f1a29cba9adb94eb72d594bd23a244d369026fce099d4716c79f3abd835" protocol=ttrpc version=3 Jul 9 14:59:47.217227 containerd[1557]: time="2025-07-09T14:59:47.215820563Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\"" Jul 9 14:59:47.218859 containerd[1557]: time="2025-07-09T14:59:47.218802508Z" level=info msg="StartContainer for \"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\"" Jul 9 14:59:47.219951 containerd[1557]: time="2025-07-09T14:59:47.219819366Z" level=info msg="connecting to shim 9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85" 
address="unix:///run/containerd/s/35b466d41ab2056d14d96206e9c974340656d4f96ce6a3a4523a8686a0167281" protocol=ttrpc version=3 Jul 9 14:59:47.257178 systemd[1]: Started cri-containerd-23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010.scope - libcontainer container 23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010. Jul 9 14:59:47.290189 systemd[1]: Started cri-containerd-9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85.scope - libcontainer container 9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85. Jul 9 14:59:47.534820 containerd[1557]: time="2025-07-09T14:59:47.534305969Z" level=info msg="StartContainer for \"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\" returns successfully" Jul 9 14:59:47.537296 containerd[1557]: time="2025-07-09T14:59:47.535800423Z" level=info msg="StartContainer for \"23f033796fa9b5019d26baa5b083e6933ffb25b107a40b8d96a8236d1260c010\" returns successfully" Jul 9 14:59:49.401374 containerd[1557]: time="2025-07-09T14:59:49.401270773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"291ef633ae6609b22e4862baa0e08590ec8189907b8bea77ca569202b09a266b\" pid:6697 exited_at:{seconds:1752073189 nanos:400659646}" Jul 9 14:59:50.220859 systemd[1]: Started sshd@25-172.24.4.253:22-172.24.4.1:54434.service - OpenSSH per-connection server daemon (172.24.4.1:54434). 
Jul 9 14:59:54.972790 containerd[1557]: time="2025-07-09T14:59:54.972383972Z" level=warning msg="container event discarded" container=4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094 type=CONTAINER_CREATED_EVENT Jul 9 14:59:54.972790 containerd[1557]: time="2025-07-09T14:59:54.972690267Z" level=warning msg="container event discarded" container=4a486c7c41b23d11bb82d58caf542469083b6cb706c5c6cd0afc1a6d61c9e094 type=CONTAINER_STARTED_EVENT Jul 9 14:59:55.815447 containerd[1557]: time="2025-07-09T14:59:55.021919465Z" level=warning msg="container event discarded" container=c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089 type=CONTAINER_CREATED_EVENT Jul 9 14:59:55.815447 containerd[1557]: time="2025-07-09T14:59:55.419221026Z" level=warning msg="container event discarded" container=c72212927a291f2b7e64f479b123544c57d39236e8e2249ee159bf940bbb2089 type=CONTAINER_STARTED_EVENT Jul 9 14:59:55.815447 containerd[1557]: time="2025-07-09T14:59:55.570752796Z" level=warning msg="container event discarded" container=ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad type=CONTAINER_CREATED_EVENT Jul 9 14:59:55.815447 containerd[1557]: time="2025-07-09T14:59:55.570893350Z" level=warning msg="container event discarded" container=ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad type=CONTAINER_STARTED_EVENT Jul 9 14:59:56.845012 sshd[6711]: Accepted publickey for core from 172.24.4.1 port 54434 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:59:56.846351 sshd-session[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:59:56.859778 systemd-logind[1534]: New session 28 of user core. Jul 9 14:59:56.876365 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 9 14:59:57.669162 sshd[6716]: Connection closed by 172.24.4.1 port 54434 Jul 9 14:59:57.671352 sshd-session[6711]: pam_unix(sshd:session): session closed for user core Jul 9 14:59:57.683718 systemd[1]: sshd@25-172.24.4.253:22-172.24.4.1:54434.service: Deactivated successfully. Jul 9 14:59:57.688823 systemd[1]: session-28.scope: Deactivated successfully. Jul 9 14:59:57.695897 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit. Jul 9 14:59:57.703476 systemd-logind[1534]: Removed session 28. Jul 9 14:59:58.203279 containerd[1557]: time="2025-07-09T14:59:58.203127872Z" level=warning msg="container event discarded" container=dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49 type=CONTAINER_CREATED_EVENT Jul 9 14:59:58.293091 containerd[1557]: time="2025-07-09T14:59:58.292863706Z" level=warning msg="container event discarded" container=dc6cb8e70a923179f706266dd6c312155afc5468cfb62e9042709defea3a0a49 type=CONTAINER_STARTED_EVENT Jul 9 15:00:02.703673 systemd[1]: Started sshd@26-172.24.4.253:22-172.24.4.1:39698.service - OpenSSH per-connection server daemon (172.24.4.1:39698). Jul 9 15:00:04.909994 sshd[6728]: Accepted publickey for core from 172.24.4.1 port 39698 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 15:00:04.911360 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 15:00:04.917864 systemd-logind[1534]: New session 29 of user core. Jul 9 15:00:04.926324 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 9 15:00:10.282412 containerd[1557]: time="2025-07-09T15:00:10.282265333Z" level=warning msg="container event discarded" container=fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab type=CONTAINER_CREATED_EVENT Jul 9 15:00:10.282412 containerd[1557]: time="2025-07-09T15:00:10.282378585Z" level=warning msg="container event discarded" container=fce36fcc985510e9e8570f862123d50170db83f9501276e11a6e4155c40627ab type=CONTAINER_STARTED_EVENT Jul 9 15:00:22.644520 kubelet[2817]: I0709 15:00:11.167246 2817 status_manager.go:914] "Failed to update status for pod" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"242d9f61-b848-4f86-a2d2-970891d02aab\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://e2416262d72237a6ff4e1757c9ccf3d5c8bdec4368ce9d9ce60b4b8adc635f8e\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.32.6\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://edea6e174e223b18028398371ca92447964fee3789a84e386ac6715a53a484fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-07-09T14:59:33Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-07-09T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-09T14:59:46Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal\": etcdserver: request timed out" Jul 9 15:00:22.644520 kubelet[2817]: E0709 15:00:14.857579 2817 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 9 15:00:22.644520 kubelet[2817]: E0709 15:00:16.654166 2817 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="3.101s" Jul 9 15:00:22.644520 kubelet[2817]: I0709 15:00:18.230150 2817 status_manager.go:890] "Failed to get status for pod" podUID="19e668e1706749a4125dc683e501ff82" pod="kube-system/kube-scheduler-ci-9999-9-100-3d8d1010bc.novalocal" err="etcdserver: request timed out" Jul 9 15:00:18.254547 systemd[1]: cri-containerd-9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85.scope: Deactivated successfully. Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:10.662668356Z" level=warning msg="container event discarded" container=0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055 type=CONTAINER_CREATED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:10.662747113Z" level=warning msg="container event discarded" container=0f4dc1946eade24e7ca241cea4ddb3a21e462a44dfc064e5f6246f9701147055 type=CONTAINER_STARTED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.522366814Z" level=warning msg="container event discarded" container=946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8 type=CONTAINER_CREATED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.553139227Z" level=error msg="get state for 38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab" error="context deadline exceeded" Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.553268670Z" level=warning msg="unknown status" status=0 Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.610134927Z" level=error msg="ttrpc: received message on inactive stream" stream=455 Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.873258173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"d666227a1115e23d4f1363a90fef88bc03cbf92c286a310b68423f368ed3a128\" pid:6807 exited_at:{seconds:1752073216 nanos:871851775}" Jul 9 15:00:22.652161 containerd[1557]: 
time="2025-07-09T15:00:16.942508638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"da76fe15f5eb7d3c56185b7a48a6db73923cf3205975e2f0107ca40705907884\" pid:6789 exited_at:{seconds:1752073216 nanos:941744835}" Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:16.980682980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"2dee1aee0259ff3cfcbe37915aee046dc4f2ddda8077f879deb361d4aaa67ad0\" pid:6786 exited_at:{seconds:1752073216 nanos:979834569}" Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.271672664Z" level=info msg="received exit event container_id:\"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\" id:\"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\" pid:6655 exit_status:1 exited_at:{seconds:1752073218 nanos:269661079}" Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.272418383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\" id:\"9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85\" pid:6655 exit_status:1 exited_at:{seconds:1752073218 nanos:269661079}" Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.831707556Z" level=warning msg="container event discarded" container=946ba5287dfbe297b642555ef5f00a27c264f4fd9a5fc996af7278f7a58aa9d8 type=CONTAINER_STARTED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.831879739Z" level=warning msg="container event discarded" container=058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c type=CONTAINER_CREATED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.831902061Z" level=warning msg="container event discarded" container=058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c 
type=CONTAINER_STARTED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:18.831916508Z" level=warning msg="container event discarded" container=058ab76206788a5eb1ee687511fc15328cfe9a773e4a1ea4365641f39ea13f3c type=CONTAINER_STOPPED_EVENT Jul 9 15:00:22.652161 containerd[1557]: time="2025-07-09T15:00:19.369774980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"18552cb8a070696a53a94bff1a3710347715b90168b1bde229469cc15e950675\" pid:6849 exited_at:{seconds:1752073219 nanos:369258550}" Jul 9 15:00:18.386212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85-rootfs.mount: Deactivated successfully. Jul 9 15:00:22.752774 containerd[1557]: time="2025-07-09T15:00:22.752688984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"33897d123897d57abe3903c96ddcdae589127587a3c74a28f2015c65d75307a7\" pid:6764 exited_at:{seconds:1752073222 nanos:751290248}" Jul 9 15:00:23.013024 kubelet[2817]: E0709 15:00:23.012680 2817 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-9999-9-100-3d8d1010bc.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 9 15:00:23.350000 sshd[6731]: Connection closed by 172.24.4.1 port 39698 Jul 9 15:00:23.353480 sshd-session[6728]: pam_unix(sshd:session): session closed for user core Jul 9 15:00:23.364110 systemd[1]: sshd@26-172.24.4.253:22-172.24.4.1:39698.service: Deactivated successfully. Jul 9 15:00:23.370580 systemd[1]: session-29.scope: Deactivated successfully. Jul 9 15:00:23.372414 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit. Jul 9 15:00:23.377133 systemd-logind[1534]: Removed session 29. 
Jul 9 15:00:23.662009 kubelet[2817]: I0709 15:00:23.661592 2817 scope.go:117] "RemoveContainer" containerID="e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb"
Jul 9 15:00:23.664717 kubelet[2817]: I0709 15:00:23.664175 2817 scope.go:117] "RemoveContainer" containerID="9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85"
Jul 9 15:00:23.664717 kubelet[2817]: E0709 15:00:23.664482 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-rtv2w_tigera-operator(fb3593d4-ee9e-46ce-a93c-b7c5fa10975a)\"" pod="tigera-operator/tigera-operator-747864d56d-rtv2w" podUID="fb3593d4-ee9e-46ce-a93c-b7c5fa10975a"
Jul 9 15:00:23.667656 containerd[1557]: time="2025-07-09T15:00:23.667597404Z" level=info msg="RemoveContainer for \"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\""
Jul 9 15:00:23.791974 containerd[1557]: time="2025-07-09T15:00:23.791500956Z" level=info msg="RemoveContainer for \"e237a22f45fc5081f41d2cad3947902397fb9b886cb89ec5eb2ef53823c0fcfb\" returns successfully"
Jul 9 15:00:24.207445 containerd[1557]: time="2025-07-09T15:00:24.207322200Z" level=warning msg="container event discarded" container=e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:24.351278 containerd[1557]: time="2025-07-09T15:00:24.351038983Z" level=warning msg="container event discarded" container=e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:27.608352 containerd[1557]: time="2025-07-09T15:00:27.603263351Z" level=warning msg="container event discarded" container=e5a216d3edaa4ced7af5761ea993f7ce28c5ee2d0829fbe81c6fdc7d92b52029 type=CONTAINER_STOPPED_EVENT
Jul 9 15:00:35.557106 kubelet[2817]: I0709 15:00:35.556867 2817 scope.go:117] "RemoveContainer" containerID="9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85"
Jul 9 15:00:35.557106 kubelet[2817]: E0709 15:00:35.557197 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-rtv2w_tigera-operator(fb3593d4-ee9e-46ce-a93c-b7c5fa10975a)\"" pod="tigera-operator/tigera-operator-747864d56d-rtv2w" podUID="fb3593d4-ee9e-46ce-a93c-b7c5fa10975a"
Jul 9 15:00:36.406574 systemd[1]: Started sshd@27-172.24.4.253:22-172.24.4.1:60054.service - OpenSSH per-connection server daemon (172.24.4.1:60054).
Jul 9 15:00:41.097259 containerd[1557]: time="2025-07-09T15:00:41.096895790Z" level=warning msg="container event discarded" container=fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a type=CONTAINER_CREATED_EVENT
Jul 9 15:00:41.242632 containerd[1557]: time="2025-07-09T15:00:41.242490918Z" level=warning msg="container event discarded" container=fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a type=CONTAINER_STARTED_EVENT
Jul 9 15:00:42.372142 containerd[1557]: time="2025-07-09T15:00:42.371865441Z" level=warning msg="container event discarded" container=28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:42.373346 containerd[1557]: time="2025-07-09T15:00:42.372064495Z" level=warning msg="container event discarded" container=28d91c47731afcc5d667fa5404ff8b4393fb5ae21a7c8170d889a7ea2dcf4126 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:42.373346 containerd[1557]: time="2025-07-09T15:00:42.373267803Z" level=warning msg="container event discarded" container=f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:42.373346 containerd[1557]: time="2025-07-09T15:00:42.373296046Z" level=warning msg="container event discarded" container=f36fe4373ea7c1b7b0ce52ab7aea3ee101b85c51e166e846b195607e0f1f7ff7 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:42.835931 containerd[1557]: time="2025-07-09T15:00:42.835845832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"8a1524ec7493e9fe9b01a9902fc5264e8506110356fca7b28ef181c3a5e972dd\" pid:6916 exited_at:{seconds:1752073242 nanos:831788478}"
Jul 9 15:00:42.884013 sshd[6900]: Accepted publickey for core from 172.24.4.1 port 60054 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 15:00:42.889582 sshd-session[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 15:00:42.914903 systemd-logind[1534]: New session 30 of user core.
Jul 9 15:00:42.930340 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 9 15:00:43.229084 containerd[1557]: time="2025-07-09T15:00:43.228706582Z" level=warning msg="container event discarded" container=bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:43.229084 containerd[1557]: time="2025-07-09T15:00:43.228908431Z" level=warning msg="container event discarded" container=bde4b4b9706052cda10c4a2489e7a09ece6c8833ed3511bd7a6e3ed4899edf74 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:43.450688 containerd[1557]: time="2025-07-09T15:00:43.450606423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"2c6822f09632a13b7bfc8d3d88c57839a7ccbd4d9086706f3583850e9530bc0c\" pid:6939 exited_at:{seconds:1752073243 nanos:449920195}"
Jul 9 15:00:43.801158 sshd[6925]: Connection closed by 172.24.4.1 port 60054
Jul 9 15:00:43.802802 sshd-session[6900]: pam_unix(sshd:session): session closed for user core
Jul 9 15:00:43.812768 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Jul 9 15:00:43.813497 systemd[1]: sshd@27-172.24.4.253:22-172.24.4.1:60054.service: Deactivated successfully.
Jul 9 15:00:43.822496 systemd[1]: session-30.scope: Deactivated successfully.
Jul 9 15:00:43.829323 systemd-logind[1534]: Removed session 30.
Jul 9 15:00:46.555004 kubelet[2817]: I0709 15:00:46.554656 2817 scope.go:117] "RemoveContainer" containerID="9695705a48606f815886b71385ec7b0da8bf7cc203c06b0d8c33fea74677fa85"
Jul 9 15:00:46.566229 containerd[1557]: time="2025-07-09T15:00:46.566120952Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}"
Jul 9 15:00:46.601017 containerd[1557]: time="2025-07-09T15:00:46.600320480Z" level=info msg="Container 2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2: CDI devices from CRI Config.CDIDevices: []"
Jul 9 15:00:46.627323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765997024.mount: Deactivated successfully.
Jul 9 15:00:46.652902 containerd[1557]: time="2025-07-09T15:00:46.652743226Z" level=info msg="CreateContainer within sandbox \"ecfb635511f7b19f55333522cd6409f233ddac81d71adc9b76c21fff77a5f1ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2\""
Jul 9 15:00:46.653965 containerd[1557]: time="2025-07-09T15:00:46.653890048Z" level=info msg="StartContainer for \"2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2\""
Jul 9 15:00:46.655856 containerd[1557]: time="2025-07-09T15:00:46.655812766Z" level=info msg="connecting to shim 2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2" address="unix:///run/containerd/s/35b466d41ab2056d14d96206e9c974340656d4f96ce6a3a4523a8686a0167281" protocol=ttrpc version=3
Jul 9 15:00:46.692139 systemd[1]: Started cri-containerd-2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2.scope - libcontainer container 2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2.
Jul 9 15:00:46.764968 containerd[1557]: time="2025-07-09T15:00:46.763621458Z" level=info msg="StartContainer for \"2c21b5f3b2e6b19d07a5a4c4f7c86c317de2882ff6995416e71dd1db47f525c2\" returns successfully"
Jul 9 15:00:48.408186 containerd[1557]: time="2025-07-09T15:00:48.407746638Z" level=warning msg="container event discarded" container=ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:48.532593 containerd[1557]: time="2025-07-09T15:00:48.532357501Z" level=warning msg="container event discarded" container=ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:48.833283 systemd[1]: Started sshd@28-172.24.4.253:22-172.24.4.1:34516.service - OpenSSH per-connection server daemon (172.24.4.1:34516).
Jul 9 15:00:49.503263 containerd[1557]: time="2025-07-09T15:00:49.503180527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"02a0ceffb8fe50c8d6943d37285122f82fc1f306d3beedd946fbf11abcd486a6\" pid:7012 exited_at:{seconds:1752073249 nanos:502462570}"
Jul 9 15:00:50.234340 sshd[6995]: Accepted publickey for core from 172.24.4.1 port 34516 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 15:00:50.239816 sshd-session[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 15:00:50.272523 systemd-logind[1534]: New session 31 of user core.
Jul 9 15:00:50.280317 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 9 15:00:50.975037 sshd[7023]: Connection closed by 172.24.4.1 port 34516
Jul 9 15:00:50.974794 sshd-session[6995]: pam_unix(sshd:session): session closed for user core
Jul 9 15:00:50.989751 systemd[1]: sshd@28-172.24.4.253:22-172.24.4.1:34516.service: Deactivated successfully.
Jul 9 15:00:50.995612 systemd[1]: session-31.scope: Deactivated successfully.
Jul 9 15:00:51.000568 systemd-logind[1534]: Session 31 logged out. Waiting for processes to exit.
Jul 9 15:00:51.006097 systemd-logind[1534]: Removed session 31.
Jul 9 15:00:52.934687 containerd[1557]: time="2025-07-09T15:00:52.934421463Z" level=warning msg="container event discarded" container=8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:53.103351 containerd[1557]: time="2025-07-09T15:00:53.103095927Z" level=warning msg="container event discarded" container=8d75e72404012573d2c4c28ecf68f40d9f947e9aa7252775436b0f00f6fafdd8 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:54.298217 containerd[1557]: time="2025-07-09T15:00:54.298048602Z" level=warning msg="container event discarded" container=9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a type=CONTAINER_CREATED_EVENT
Jul 9 15:00:54.298217 containerd[1557]: time="2025-07-09T15:00:54.298166864Z" level=warning msg="container event discarded" container=9fb761ea5306f5bd96495af07a6e8578ce921cbc9db0e99f39f8b8049819740a type=CONTAINER_STARTED_EVENT
Jul 9 15:00:54.367236 containerd[1557]: time="2025-07-09T15:00:54.367040494Z" level=warning msg="container event discarded" container=30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b type=CONTAINER_CREATED_EVENT
Jul 9 15:00:54.475983 containerd[1557]: time="2025-07-09T15:00:54.475715263Z" level=warning msg="container event discarded" container=bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c type=CONTAINER_CREATED_EVENT
Jul 9 15:00:54.476345 containerd[1557]: time="2025-07-09T15:00:54.475897364Z" level=warning msg="container event discarded" container=bfd8d7cac6f04a62a6f6ece4938de073c0e6624ce18df59f35bd6c8e6788815c type=CONTAINER_STARTED_EVENT
Jul 9 15:00:54.492486 containerd[1557]: time="2025-07-09T15:00:54.492302742Z" level=warning msg="container event discarded" container=30135be8c9b621979686ac3c0dc4b00e88adf393c9c6fde9ca33cf60a201647b type=CONTAINER_STARTED_EVENT
Jul 9 15:00:54.528198 containerd[1557]: time="2025-07-09T15:00:54.528072356Z" level=warning msg="container event discarded" container=8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af type=CONTAINER_CREATED_EVENT
Jul 9 15:00:54.701369 containerd[1557]: time="2025-07-09T15:00:54.701153583Z" level=warning msg="container event discarded" container=8d9da4273c7688fc6f858b77dd4282a765443eefa29322f0256ba45b638414af type=CONTAINER_STARTED_EVENT
Jul 9 15:00:55.383533 containerd[1557]: time="2025-07-09T15:00:55.383356461Z" level=warning msg="container event discarded" container=13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc type=CONTAINER_CREATED_EVENT
Jul 9 15:00:55.384480 containerd[1557]: time="2025-07-09T15:00:55.383512083Z" level=warning msg="container event discarded" container=13af6a96d65f8cfa460626dea1a2e091f2dc35b47b13013a322b58a775afb4dc type=CONTAINER_STARTED_EVENT
Jul 9 15:00:55.487234 containerd[1557]: time="2025-07-09T15:00:55.487102802Z" level=warning msg="container event discarded" container=716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:55.842416 containerd[1557]: time="2025-07-09T15:00:55.842268089Z" level=warning msg="container event discarded" container=716e55bcde5ac6169db87993fbf92cd6138fc78e506f9b5fe7798e1c9b0a0540 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:56.073901 containerd[1557]: time="2025-07-09T15:00:56.073719254Z" level=warning msg="container event discarded" container=5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:56.691902 containerd[1557]: time="2025-07-09T15:00:56.691760859Z" level=warning msg="container event discarded" container=44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509 type=CONTAINER_CREATED_EVENT
Jul 9 15:00:56.691902 containerd[1557]: time="2025-07-09T15:00:56.691858773Z" level=warning msg="container event discarded" container=44bf02066d7a581f6502d6b2bc426be4c717eebf6932422f8d34bab506401509 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:56.787341 containerd[1557]: time="2025-07-09T15:00:56.787173842Z" level=warning msg="container event discarded" container=5ed07ebabe25f5f4ec303d21db93714dc247fb3b50daf4f721646e19e1ec6ae4 type=CONTAINER_STARTED_EVENT
Jul 9 15:00:56.898197 containerd[1557]: time="2025-07-09T15:00:56.898017052Z" level=warning msg="container event discarded" container=1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca type=CONTAINER_CREATED_EVENT
Jul 9 15:00:56.898567 containerd[1557]: time="2025-07-09T15:00:56.898131538Z" level=warning msg="container event discarded" container=1a1aebfb18a93301e58448b437eb1baf5a59b13591ad819efd09e5bdea7d67ca type=CONTAINER_STARTED_EVENT
Jul 9 15:00:59.901704 containerd[1557]: time="2025-07-09T15:00:59.901537846Z" level=warning msg="container event discarded" container=bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b type=CONTAINER_CREATED_EVENT
Jul 9 15:01:00.085145 containerd[1557]: time="2025-07-09T15:01:00.085016086Z" level=warning msg="container event discarded" container=bac7a5cd2865b25bfe10d29f04bb3c62b5d94237fbcd750a1b73c41a24d2191b type=CONTAINER_STARTED_EVENT
Jul 9 15:01:03.801771 containerd[1557]: time="2025-07-09T15:01:03.801560841Z" level=warning msg="container event discarded" container=b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6 type=CONTAINER_CREATED_EVENT
Jul 9 15:01:03.947763 containerd[1557]: time="2025-07-09T15:01:03.947591117Z" level=warning msg="container event discarded" container=b6dd00a334099754d81f28e2073f40b0f1e34c25406b21f503edda27a31603e6 type=CONTAINER_STARTED_EVENT
Jul 9 15:01:08.461088 containerd[1557]: time="2025-07-09T15:01:08.460618094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"a31a60ce3916d754d6f1864638e15068cc3911ee58e77cde2dbb780334fec538\" pid:7053 exited_at:{seconds:1752073268 nanos:459988854}"
Jul 9 15:01:09.476180 containerd[1557]: time="2025-07-09T15:01:09.475099437Z" level=warning msg="container event discarded" container=38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab type=CONTAINER_CREATED_EVENT
Jul 9 15:01:09.742627 containerd[1557]: time="2025-07-09T15:01:09.742155168Z" level=warning msg="container event discarded" container=38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab type=CONTAINER_STARTED_EVENT
Jul 9 15:01:12.328195 containerd[1557]: time="2025-07-09T15:01:12.328093454Z" level=warning msg="container event discarded" container=a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3 type=CONTAINER_CREATED_EVENT
Jul 9 15:01:12.538143 containerd[1557]: time="2025-07-09T15:01:12.537828604Z" level=warning msg="container event discarded" container=a42a3b67c6d7e025661ae2a61cb2ea06559d3ada7157d6ac3430964f54c232c3 type=CONTAINER_STARTED_EVENT
Jul 9 15:01:12.540082 containerd[1557]: time="2025-07-09T15:01:12.538060890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"0c3fab3d34bc74467ad943206f13b89ddeedfd7155097e0ff2f91b6cc1268c12\" pid:7076 exited_at:{seconds:1752073272 nanos:537221455}"
Jul 9 15:01:12.758808 containerd[1557]: time="2025-07-09T15:01:12.758237085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"8c22955999ec63c0370387fcf994ac952d5cfe350071c49af356a1b018f51401\" pid:7100 exited_at:{seconds:1752073272 nanos:757628282}"
Jul 9 15:01:13.487489 containerd[1557]: time="2025-07-09T15:01:13.487381248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"c6d01e080296776ca03768ad9cba020115db4caf14d1321b952c7691bff897b7\" pid:7122 exited_at:{seconds:1752073273 nanos:486632012}"
Jul 9 15:01:19.423615 containerd[1557]: time="2025-07-09T15:01:19.423288201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"c3b8b9a39e8660bfcc888be42ed4d83a2156e5f14d6d11e320730f777a081434\" pid:7148 exited_at:{seconds:1752073279 nanos:422579973}"
Jul 9 15:01:42.905225 containerd[1557]: time="2025-07-09T15:01:42.903699723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38169dc78070884cbd10432744c28e0736e836edb7f4c4f4088e9dd5a79adbab\" id:\"4c1fb19c6459ba855e0c4223f6deaf6b9e6e253433263b6a5a5bea960837582b\" pid:7171 exited_at:{seconds:1752073302 nanos:901758080}"
Jul 9 15:01:43.415393 containerd[1557]: time="2025-07-09T15:01:43.414841129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd174a37002923a204890916f7f8846314955e251d34dcc99e4232f1eb7a814a\" id:\"1f4aa674fdf9b8995a45f5c23856f9149616e4b0ec9b0c6290e7671b8fc92302\" pid:7193 exited_at:{seconds:1752073303 nanos:414413657}"
Jul 9 15:01:49.368767 containerd[1557]: time="2025-07-09T15:01:49.368715851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce095930f0ccaac461447f81d6b4a0f6ed86ddc2056a4d5e9ef50cd503f1a400\" id:\"533ada53c5f86a2c0a8413e0398a4cba707cf9c0c6b0512805da3c892a17bcf9\" pid:7217 exited_at:{seconds:1752073309 nanos:368126845}"