Sep 9 05:35:36.238713 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:35:36.238767 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:36.238786 kernel: BIOS-provided physical RAM map:
Sep 9 05:35:36.238800 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 9 05:35:36.238814 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 9 05:35:36.238830 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 9 05:35:36.238969 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 9 05:35:36.238994 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 9 05:35:36.239017 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 9 05:35:36.239033 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 9 05:35:36.239059 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 9 05:35:36.239088 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 9 05:35:36.239117 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 9 05:35:36.239146 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 9 05:35:36.239181 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 9 05:35:36.239208 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 9 05:35:36.239235 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 9 05:35:36.239272 kernel: NX (Execute Disable) protection: active
Sep 9 05:35:36.239300 kernel: APIC: Static calls initialized
Sep 9 05:35:36.239326 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:35:36.239354 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 9 05:35:36.239381 kernel: random: crng init done
Sep 9 05:35:36.239411 kernel: secureboot: Secure boot disabled
Sep 9 05:35:36.239438 kernel: SMBIOS 2.4 present.
Sep 9 05:35:36.239465 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 9 05:35:36.239492 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:35:36.239511 kernel: Hypervisor detected: KVM
Sep 9 05:35:36.239538 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:35:36.239565 kernel: kvm-clock: using sched offset of 15161086617 cycles
Sep 9 05:35:36.239593 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:35:36.239620 kernel: tsc: Detected 2299.998 MHz processor
Sep 9 05:35:36.239648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:35:36.239680 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:35:36.239707 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 9 05:35:36.239735 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 9 05:35:36.239762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:35:36.239789 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 9 05:35:36.239816 kernel: Using GB pages for direct mapping
Sep 9 05:35:36.239865 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:35:36.239893 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 9 05:35:36.239935 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 9 05:35:36.239964 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 9 05:35:36.239993 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 9 05:35:36.240022 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 9 05:35:36.240052 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 9 05:35:36.240081 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 9 05:35:36.240114 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 9 05:35:36.240143 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 9 05:35:36.240173 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 9 05:35:36.240195 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 9 05:35:36.240224 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 9 05:35:36.240264 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 9 05:35:36.240293 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 9 05:35:36.240321 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 9 05:35:36.240351 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 9 05:35:36.240384 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 9 05:35:36.240412 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 9 05:35:36.240441 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 9 05:35:36.240470 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 9 05:35:36.240497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 05:35:36.240526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 9 05:35:36.240556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 9 05:35:36.240585 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Sep 9 05:35:36.240615 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Sep 9 05:35:36.240648 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Sep 9 05:35:36.240678 kernel: Zone ranges:
Sep 9 05:35:36.240707 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:35:36.240736 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 9 05:35:36.240765 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 9 05:35:36.240794 kernel: Device empty
Sep 9 05:35:36.240823 kernel: Movable zone start for each node
Sep 9 05:35:36.240869 kernel: Early memory node ranges
Sep 9 05:35:36.240899 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 9 05:35:36.240932 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 9 05:35:36.240961 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 9 05:35:36.240990 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 9 05:35:36.241019 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 9 05:35:36.241048 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 9 05:35:36.241077 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 9 05:35:36.241106 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:35:36.241135 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 9 05:35:36.241164 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 9 05:35:36.241194 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 9 05:35:36.241227 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:35:36.241264 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 9 05:35:36.241293 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 9 05:35:36.241322 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:35:36.241351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:35:36.241380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:35:36.241410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:35:36.241439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:35:36.241468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:35:36.241500 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:35:36.241530 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:35:36.241559 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:35:36.241588 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:35:36.241616 kernel: CPU topo: Max. threads per core: 2
Sep 9 05:35:36.241646 kernel: CPU topo: Num. cores per package: 1
Sep 9 05:35:36.241674 kernel: CPU topo: Num. threads per package: 2
Sep 9 05:35:36.241703 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 9 05:35:36.241733 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 9 05:35:36.241766 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:35:36.241795 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:35:36.241825 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 9 05:35:36.241868 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 9 05:35:36.241897 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 9 05:35:36.241926 kernel: pcpu-alloc: [0] 0 1
Sep 9 05:35:36.241954 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:35:36.241984 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:35:36.242015 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:36.242049 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:35:36.242078 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 9 05:35:36.242107 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:35:36.242137 kernel: Fallback order for Node 0: 0
Sep 9 05:35:36.242166 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Sep 9 05:35:36.242196 kernel: Policy zone: Normal
Sep 9 05:35:36.242225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:35:36.242261 kernel: software IO TLB: area num 2.
Sep 9 05:35:36.242311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 05:35:36.242343 kernel: Kernel/User page tables isolation: enabled
Sep 9 05:35:36.242374 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:35:36.242408 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:35:36.242439 kernel: Dynamic Preempt: voluntary
Sep 9 05:35:36.242471 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:35:36.242503 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:35:36.242535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 05:35:36.242570 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:35:36.242602 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:35:36.242633 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:35:36.242663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:35:36.242694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 05:35:36.242726 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:36.242757 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:36.242788 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:36.242819 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 9 05:35:36.242867 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:35:36.242899 kernel: Console: colour dummy device 80x25
Sep 9 05:35:36.242930 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:35:36.242962 kernel: ACPI: Core revision 20240827
Sep 9 05:35:36.242993 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:35:36.243024 kernel: x2apic enabled
Sep 9 05:35:36.243055 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:35:36.243087 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 9 05:35:36.243118 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 9 05:35:36.243154 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 9 05:35:36.243185 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 9 05:35:36.243216 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 9 05:35:36.243254 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:35:36.243285 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 9 05:35:36.243316 kernel: Spectre V2 : Mitigation: IBRS
Sep 9 05:35:36.243348 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:35:36.243379 kernel: RETBleed: Mitigation: IBRS
Sep 9 05:35:36.243410 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:35:36.243445 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 9 05:35:36.243476 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:35:36.243504 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 05:35:36.243535 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 9 05:35:36.243566 kernel: active return thunk: its_return_thunk
Sep 9 05:35:36.243598 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 05:35:36.243629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:35:36.243660 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:35:36.243691 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:35:36.243726 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:35:36.243757 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 05:35:36.243789 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:35:36.243819 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:35:36.243866 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:35:36.243897 kernel: landlock: Up and running.
Sep 9 05:35:36.243928 kernel: SELinux: Initializing.
Sep 9 05:35:36.243959 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:35:36.243989 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:35:36.244022 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 9 05:35:36.244043 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 9 05:35:36.244059 kernel: signal: max sigframe size: 1776
Sep 9 05:35:36.244086 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:35:36.244111 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:35:36.244130 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:35:36.244144 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 05:35:36.244391 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:35:36.244427 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:35:36.244457 kernel: .... node #0, CPUs: #1
Sep 9 05:35:36.246271 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 9 05:35:36.246302 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 9 05:35:36.246323 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 05:35:36.246342 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 9 05:35:36.246363 kernel: Memory: 7564028K/7860552K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 290704K reserved, 0K cma-reserved)
Sep 9 05:35:36.246383 kernel: devtmpfs: initialized
Sep 9 05:35:36.246403 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:35:36.246429 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 9 05:35:36.246445 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:35:36.246464 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 05:35:36.246484 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:35:36.246501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:35:36.246520 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:35:36.246539 kernel: audit: type=2000 audit(1757396131.558:1): state=initialized audit_enabled=0 res=1
Sep 9 05:35:36.246557 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:35:36.246580 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:35:36.246598 kernel: cpuidle: using governor menu
Sep 9 05:35:36.246617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:35:36.246635 kernel: dca service started, version 1.12.1
Sep 9 05:35:36.246654 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:35:36.246672 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:35:36.246691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:35:36.246709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:35:36.246727 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:35:36.246749 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:35:36.246766 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:35:36.246784 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:35:36.246803 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:35:36.246821 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 9 05:35:36.246869 kernel: ACPI: Interpreter enabled
Sep 9 05:35:36.246897 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 05:35:36.246921 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:35:36.246947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:35:36.246971 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 9 05:35:36.247000 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 9 05:35:36.247024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:35:36.247358 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:35:36.247624 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 9 05:35:36.250001 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 9 05:35:36.250045 kernel: PCI host bridge to bus 0000:00
Sep 9 05:35:36.250302 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:35:36.251615 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:35:36.252127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:36.252359 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 9 05:35:36.252570 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:35:36.252831 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:35:36.255169 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:35:36.255442 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 9 05:35:36.255684 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 9 05:35:36.255950 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Sep 9 05:35:36.256193 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Sep 9 05:35:36.256468 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Sep 9 05:35:36.256735 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:35:36.259109 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Sep 9 05:35:36.259347 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Sep 9 05:35:36.259609 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:35:36.259856 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Sep 9 05:35:36.260087 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Sep 9 05:35:36.260122 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:35:36.260152 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:35:36.260188 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:35:36.260218 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:35:36.260259 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 9 05:35:36.260289 kernel: iommu: Default domain type: Translated
Sep 9 05:35:36.260319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:35:36.260348 kernel: efivars: Registered efivars operations
Sep 9 05:35:36.260377 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:35:36.260407 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:35:36.260436 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 9 05:35:36.260470 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 9 05:35:36.260497 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 9 05:35:36.260536 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 9 05:35:36.260567 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 9 05:35:36.260597 kernel: vgaarb: loaded
Sep 9 05:35:36.260626 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:35:36.260651 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:35:36.260670 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:35:36.260689 kernel: pnp: PnP ACPI init
Sep 9 05:35:36.260713 kernel: pnp: PnP ACPI: found 7 devices
Sep 9 05:35:36.260733 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:35:36.260754 kernel: NET: Registered PF_INET protocol family
Sep 9 05:35:36.260773 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 05:35:36.260793 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 9 05:35:36.260812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:35:36.260831 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:35:36.264896 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 9 05:35:36.264921 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 9 05:35:36.264947 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 9 05:35:36.264968 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 9 05:35:36.264988 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:35:36.265007 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:35:36.265226 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:35:36.265427 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:35:36.265614 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:36.265800 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 9 05:35:36.266061 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 05:35:36.266089 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:35:36.266110 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 9 05:35:36.266129 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 9 05:35:36.266149 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 05:35:36.266168 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 9 05:35:36.266189 kernel: clocksource: Switched to clocksource tsc
Sep 9 05:35:36.266209 kernel: Initialise system trusted keyrings
Sep 9 05:35:36.266234 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 9 05:35:36.266260 kernel: Key type asymmetric registered
Sep 9 05:35:36.266279 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:35:36.266297 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:35:36.266316 kernel: io scheduler mq-deadline registered
Sep 9 05:35:36.266335 kernel: io scheduler kyber registered
Sep 9 05:35:36.266354 kernel: io scheduler bfq registered
Sep 9 05:35:36.266375 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:35:36.266395 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 9 05:35:36.266614 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 9 05:35:36.266640 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 9 05:35:36.270880 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 9 05:35:36.270922 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 9 05:35:36.271189 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 9 05:35:36.271216 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:35:36.271237 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:35:36.271264 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 9 05:35:36.271285 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 9 05:35:36.271310 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 9 05:35:36.271533 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 9 05:35:36.271562 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:35:36.271582 kernel: i8042: Warning: Keylock active
Sep 9 05:35:36.271601 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:35:36.271623 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:35:36.271867 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 9 05:35:36.272079 kernel: rtc_cmos 00:00: registered as rtc0
Sep 9 05:35:36.272280 kernel: rtc_cmos 00:00: setting system clock to 2025-09-09T05:35:35 UTC (1757396135)
Sep 9 05:35:36.272472 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 9 05:35:36.272498 kernel: intel_pstate: CPU model not supported
Sep 9 05:35:36.272519 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:35:36.272538 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:35:36.272558 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:35:36.272578 kernel: Segment Routing with IPv6
Sep 9 05:35:36.272598 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:35:36.272624 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:35:36.272643 kernel: Key type dns_resolver registered
Sep 9 05:35:36.272662 kernel: IPI shorthand broadcast: enabled
Sep 9 05:35:36.272682 kernel: sched_clock: Marking stable (4108005329, 348910503)->(4825334755, -368418923)
Sep 9 05:35:36.272703 kernel: registered taskstats version 1
Sep 9 05:35:36.272723 kernel: Loading compiled-in X.509 certificates
Sep 9 05:35:36.272743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:35:36.272762 kernel: Demotion targets for Node 0: null
Sep 9 05:35:36.272781 kernel: Key type .fscrypt registered
Sep 9 05:35:36.272805 kernel: Key type fscrypt-provisioning registered
Sep 9 05:35:36.272826 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:35:36.274440 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 05:35:36.274469 kernel: ima: No architecture policies found
Sep 9 05:35:36.274488 kernel: clk: Disabling unused clocks
Sep 9 05:35:36.274507 kernel: Warning: unable to open an initial console.
Sep 9 05:35:36.274527 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:35:36.274548 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:35:36.274582 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:35:36.274604 kernel: Run /init as init process
Sep 9 05:35:36.274627 kernel: with arguments:
Sep 9 05:35:36.274649 kernel: /init
Sep 9 05:35:36.274676 kernel: with environment:
Sep 9 05:35:36.274697 kernel: HOME=/
Sep 9 05:35:36.274721 kernel: TERM=linux
Sep 9 05:35:36.274746 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:35:36.274777 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:35:36.274807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:36.274829 systemd[1]: Detected virtualization google.
Sep 9 05:35:36.274864 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:36.274886 systemd[1]: Running in initrd.
Sep 9 05:35:36.274908 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:35:36.274933 systemd[1]: Hostname set to .
Sep 9 05:35:36.274957 systemd[1]: Initializing machine ID from random generator.
Sep 9 05:35:36.274989 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:35:36.275038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:36.275068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:36.275096 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:35:36.275117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:36.275143 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:35:36.275177 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:35:36.275204 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:35:36.275229 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:35:36.275264 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:36.275288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:36.275311 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:35:36.275333 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:36.275360 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:36.275386 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:35:36.275411 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:36.275437 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:36.275465 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:35:36.275492 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:35:36.275514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:36.275539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:36.275571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:36.275594 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:35:36.275622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:35:36.275648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:35:36.275674 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:35:36.275702 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:35:36.275726 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:35:36.275753 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:35:36.275778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:35:36.275808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:35:36.275833 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:35:36.276920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:36.276951 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:35:36.276978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:35:36.277041 systemd-journald[207]: Collecting audit messages is disabled.
Sep 9 05:35:36.277089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:36.277117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:35:36.277139 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:35:36.277161 systemd-journald[207]: Journal started
Sep 9 05:35:36.277205 systemd-journald[207]: Runtime Journal (/run/log/journal/1f0276afa71c4210bb4f5ac2929f5a0d) is 8M, max 148.9M, 140.9M free.
Sep 9 05:35:36.256728 systemd-modules-load[208]: Inserted module 'overlay'
Sep 9 05:35:36.285068 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:35:36.303083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:35:36.313957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:35:36.312096 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:35:36.320880 kernel: Bridge firewalling registered
Sep 9 05:35:36.321983 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 9 05:35:36.328462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:36.334919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:35:36.342607 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:35:36.343440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:35:36.352069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:35:36.352803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:36.359581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:36.390588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:36.400970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:36.408033 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:36.492777 systemd-resolved[255]: Positive Trust Anchors:
Sep 9 05:35:36.493491 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:35:36.494193 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:35:36.509753 systemd-resolved[255]: Defaulting to hostname 'linux'.
Sep 9 05:35:36.513669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:36.517085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:36.554896 kernel: SCSI subsystem initialized
Sep 9 05:35:36.567891 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:35:36.581892 kernel: iscsi: registered transport (tcp)
Sep 9 05:35:36.609010 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:35:36.609106 kernel: QLogic iSCSI HBA Driver
Sep 9 05:35:36.635310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:35:36.656106 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:36.665205 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:35:36.734388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:35:36.741306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:35:36.805902 kernel: raid6: avx2x4 gen() 17781 MB/s
Sep 9 05:35:36.822910 kernel: raid6: avx2x2 gen() 17879 MB/s
Sep 9 05:35:36.840902 kernel: raid6: avx2x1 gen() 13841 MB/s
Sep 9 05:35:36.841010 kernel: raid6: using algorithm avx2x2 gen() 17879 MB/s
Sep 9 05:35:36.858486 kernel: raid6: .... xor() 17718 MB/s, rmw enabled
Sep 9 05:35:36.858564 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 05:35:36.882901 kernel: xor: automatically using best checksumming function avx
Sep 9 05:35:37.083901 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:35:37.094287 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:35:37.098745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:37.150162 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 9 05:35:37.159981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:37.167901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:35:37.201751 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
Sep 9 05:35:37.240024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:35:37.244022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:35:37.351926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:37.359714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:35:37.496906 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 05:35:37.502115 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Sep 9 05:35:37.530935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 9 05:35:37.541881 kernel: AES CTR mode by8 optimization enabled
Sep 9 05:35:37.550099 kernel: scsi host0: Virtio SCSI HBA
Sep 9 05:35:37.556882 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 9 05:35:37.616383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:35:37.620524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:37.635213 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:35:37.642644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:35:37.653701 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:37.670899 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 9 05:35:37.671339 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 9 05:35:37.671650 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 9 05:35:37.673616 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 9 05:35:37.673973 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 9 05:35:37.688214 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 05:35:37.688280 kernel: GPT:17805311 != 25165823
Sep 9 05:35:37.688312 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 05:35:37.688337 kernel: GPT:17805311 != 25165823
Sep 9 05:35:37.688361 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 05:35:37.688385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 05:35:37.690883 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 9 05:35:37.700368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:37.783253 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 9 05:35:37.800464 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:35:37.823760 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 9 05:35:37.836691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 9 05:35:37.837119 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 9 05:35:37.858438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 9 05:35:37.859006 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:35:37.864164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:37.869114 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:35:37.874726 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:35:37.881867 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:35:37.901722 disk-uuid[610]: Primary Header is updated.
Sep 9 05:35:37.901722 disk-uuid[610]: Secondary Entries is updated.
Sep 9 05:35:37.901722 disk-uuid[610]: Secondary Header is updated.
Sep 9 05:35:37.909626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:35:37.923897 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 05:35:37.947879 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 05:35:38.966874 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 05:35:38.966972 disk-uuid[615]: The operation has completed successfully.
Sep 9 05:35:39.068327 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:35:39.068509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:35:39.124972 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:35:39.148686 sh[632]: Success
Sep 9 05:35:39.173361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:35:39.173455 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:35:39.173506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:35:39.187869 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 9 05:35:39.275340 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:35:39.281967 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:35:39.302727 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:35:39.320918 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (644)
Sep 9 05:35:39.324678 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56
Sep 9 05:35:39.324755 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:35:39.349306 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 05:35:39.349431 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:35:39.349466 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:35:39.354312 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:35:39.356592 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:35:39.357561 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:35:39.360189 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:35:39.369621 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:35:39.409469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (675)
Sep 9 05:35:39.409556 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:35:39.411093 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:35:39.421328 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 05:35:39.421437 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 05:35:39.421471 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 05:35:39.430941 kernel: BTRFS info (device sda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:35:39.433611 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:35:39.441069 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:35:39.576090 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:35:39.595127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:35:39.713974 ignition[734]: Ignition 2.22.0
Sep 9 05:35:39.714486 ignition[734]: Stage: fetch-offline
Sep 9 05:35:39.714548 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:39.719431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:35:39.714567 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:39.719715 systemd-networkd[813]: lo: Link UP
Sep 9 05:35:39.714766 ignition[734]: parsed url from cmdline: ""
Sep 9 05:35:39.719720 systemd-networkd[813]: lo: Gained carrier
Sep 9 05:35:39.714774 ignition[734]: no config URL provided
Sep 9 05:35:39.724317 systemd-networkd[813]: Enumeration completed
Sep 9 05:35:39.714785 ignition[734]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:35:39.725292 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:35:39.714803 ignition[734]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:35:39.725302 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:35:39.714817 ignition[734]: failed to fetch config: resource requires networking
Sep 9 05:35:39.726295 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:35:39.715197 ignition[734]: Ignition finished successfully
Sep 9 05:35:39.728017 systemd-networkd[813]: eth0: Link UP
Sep 9 05:35:39.728309 systemd-networkd[813]: eth0: Gained carrier
Sep 9 05:35:39.728327 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:35:39.733168 systemd[1]: Reached target network.target - Network.
Sep 9 05:35:39.738499 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 05:35:39.741704 systemd-networkd[813]: eth0: Overlong DHCP hostname received, shortened from 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb.c.flatcar-212911.internal' to 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb'
Sep 9 05:35:39.741725 systemd-networkd[813]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 9 05:35:39.799418 ignition[822]: Ignition 2.22.0
Sep 9 05:35:39.799440 ignition[822]: Stage: fetch
Sep 9 05:35:39.799698 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:39.799719 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:39.799913 ignition[822]: parsed url from cmdline: ""
Sep 9 05:35:39.799921 ignition[822]: no config URL provided
Sep 9 05:35:39.799931 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:35:39.799944 ignition[822]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:35:39.815784 unknown[822]: fetched base config from "system"
Sep 9 05:35:39.799993 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 9 05:35:39.815798 unknown[822]: fetched base config from "system"
Sep 9 05:35:39.805329 ignition[822]: GET result: OK
Sep 9 05:35:39.815807 unknown[822]: fetched user config from "gcp"
Sep 9 05:35:39.805467 ignition[822]: parsing config with SHA512: cc36f4eb9073e6be029e6928be8f2625d045e88d52ac2a5deabf832a4681f0084ce877dc844ee14041bb432c229cc2291df41eb5f8b81064fedce7460e7e0952
Sep 9 05:35:39.820634 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 05:35:39.816501 ignition[822]: fetch: fetch complete
Sep 9 05:35:39.828108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 05:35:39.816515 ignition[822]: fetch: fetch passed
Sep 9 05:35:39.816588 ignition[822]: Ignition finished successfully
Sep 9 05:35:39.882091 ignition[830]: Ignition 2.22.0
Sep 9 05:35:39.882117 ignition[830]: Stage: kargs
Sep 9 05:35:39.882395 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:39.887818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 05:35:39.882418 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:39.895006 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 05:35:39.884113 ignition[830]: kargs: kargs passed
Sep 9 05:35:39.884184 ignition[830]: Ignition finished successfully
Sep 9 05:35:39.935911 ignition[837]: Ignition 2.22.0
Sep 9 05:35:39.935931 ignition[837]: Stage: disks
Sep 9 05:35:39.936234 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:39.940237 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 05:35:39.936256 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:39.945491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 05:35:39.937511 ignition[837]: disks: disks passed
Sep 9 05:35:39.950203 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 05:35:39.937570 ignition[837]: Ignition finished successfully
Sep 9 05:35:39.953673 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:35:39.958382 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:35:39.962207 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:35:39.969146 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 05:35:40.013586 systemd-fsck[846]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 9 05:35:40.026520 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 05:35:40.033499 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 05:35:40.231434 kernel: EXT4-fs (sda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none.
Sep 9 05:35:40.232614 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 05:35:40.235021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:35:40.241097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:35:40.246583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 05:35:40.251095 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 05:35:40.252683 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 05:35:40.254758 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:35:40.271874 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (854)
Sep 9 05:35:40.275884 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:35:40.279020 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 05:35:40.280513 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:35:40.283933 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 05:35:40.292348 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 05:35:40.292398 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 05:35:40.292428 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 05:35:40.296495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:35:40.425120 initrd-setup-root[878]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 05:35:40.435560 initrd-setup-root[885]: cut: /sysroot/etc/group: No such file or directory
Sep 9 05:35:40.442800 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 05:35:40.450136 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 05:35:40.603643 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 05:35:40.607708 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 05:35:40.623557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 05:35:40.637877 kernel: BTRFS info (device sda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:35:40.637875 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 05:35:40.688025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 05:35:40.691950 ignition[966]: INFO : Ignition 2.22.0
Sep 9 05:35:40.695037 ignition[966]: INFO : Stage: mount
Sep 9 05:35:40.695037 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:40.695037 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:40.695037 ignition[966]: INFO : mount: mount passed
Sep 9 05:35:40.695037 ignition[966]: INFO : Ignition finished successfully
Sep 9 05:35:40.696817 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 05:35:40.700758 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 05:35:40.727866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:35:40.757920 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (979)
Sep 9 05:35:40.761098 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:35:40.761165 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:35:40.768023 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 05:35:40.768124 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 05:35:40.768157 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 05:35:40.772171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:35:40.817515 ignition[995]: INFO : Ignition 2.22.0
Sep 9 05:35:40.821061 ignition[995]: INFO : Stage: files
Sep 9 05:35:40.821061 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:40.821061 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:40.821061 ignition[995]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 05:35:40.833009 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 05:35:40.833009 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 05:35:40.833009 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 05:35:40.833009 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 05:35:40.833009 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 05:35:40.829569 unknown[995]: wrote ssh authorized keys file for user: core
Sep 9 05:35:40.853013 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 05:35:40.853013 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 9 05:35:41.047383 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 05:35:41.384085 systemd-networkd[813]: eth0: Gained IPv6LL
Sep 9 05:35:42.178497 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:35:42.182509 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 05:35:42.212974 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 9 05:35:42.656352 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 05:35:43.905693 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 05:35:43.905693 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:35:43.916051 ignition[995]: INFO : files: files passed
Sep 9 05:35:43.916051 ignition[995]: INFO : Ignition finished successfully
Sep 9 05:35:43.918581 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 05:35:43.923405 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 05:35:43.933984 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:35:43.959739 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:35:43.960037 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:35:43.971001 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:43.971001 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:43.977015 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:43.977109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:43.983794 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:35:43.989585 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:35:44.073146 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:35:44.073742 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:35:44.079555 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:44.082270 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:35:44.086722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:35:44.089355 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:35:44.126007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:44.129272 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:35:44.159350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:44.159935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:44.166464 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:35:44.169534 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:35:44.169826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:44.178295 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:35:44.179999 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:35:44.183600 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:35:44.187936 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:35:44.191415 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:35:44.195824 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:35:44.201928 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:35:44.206137 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:35:44.209921 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:35:44.213615 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:35:44.217815 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:35:44.222759 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:35:44.223941 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:35:44.230885 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:44.234471 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:44.238225 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:35:44.238894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:44.242955 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:35:44.243796 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:35:44.250115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:35:44.250744 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:44.252935 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:35:44.253201 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:35:44.263075 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:35:44.272045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:35:44.272364 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:44.279464 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:35:44.291029 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:35:44.292984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:44.298926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:35:44.299247 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:35:44.319084 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:35:44.325428 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:35:44.326985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:35:44.333569 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:35:44.334235 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:35:44.349318 ignition[1050]: INFO : Ignition 2.22.0
Sep 9 05:35:44.352044 ignition[1050]: INFO : Stage: umount
Sep 9 05:35:44.352044 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:44.352044 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:35:44.352044 ignition[1050]: INFO : umount: umount passed
Sep 9 05:35:44.352044 ignition[1050]: INFO : Ignition finished successfully
Sep 9 05:35:44.355958 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:35:44.356564 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:35:44.359553 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:35:44.359820 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:35:44.363548 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:35:44.363803 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:35:44.367510 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 05:35:44.367771 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 05:35:44.372413 systemd[1]: Stopped target network.target - Network.
Sep 9 05:35:44.376232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:35:44.376647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:35:44.383072 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:35:44.386983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:35:44.390985 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:44.395993 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:35:44.399319 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:35:44.403387 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:35:44.403493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:44.410309 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:35:44.410401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:44.413341 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:35:44.413479 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:35:44.421110 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:35:44.421237 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:35:44.424049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:35:44.424169 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:35:44.430310 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:35:44.434491 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:44.440184 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:35:44.440405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:44.448393 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:35:44.448743 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:35:44.448939 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:35:44.455233 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:35:44.456524 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:35:44.462085 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:35:44.462182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:44.468954 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:35:44.474970 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:35:44.475087 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:35:44.479337 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:35:44.479405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:44.488722 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:35:44.489001 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:44.491507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:35:44.491748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:44.499052 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:44.509079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:35:44.509168 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:44.525259 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:35:44.527510 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:44.532770 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:35:44.533106 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:35:44.540274 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:35:44.540381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:44.546409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:35:44.546664 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:44.550458 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:35:44.550583 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:35:44.562064 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:35:44.562246 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:35:44.569979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:35:44.570128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:35:44.579698 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:35:44.585198 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:35:44.585328 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:44.586685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:35:44.586819 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:44.592325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:35:44.592423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:44.599066 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:35:44.599182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:35:44.599279 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:44.611242 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:35:44.611412 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:35:44.615445 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:35:44.620894 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:35:44.656681 systemd[1]: Switching root.
Sep 9 05:35:44.707755 systemd-journald[207]: Journal stopped
Sep 9 05:35:47.022849 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:35:47.024965 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:35:47.025005 kernel: SELinux: policy capability open_perms=1
Sep 9 05:35:47.025035 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:35:47.025057 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:35:47.025086 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:35:47.025123 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:35:47.025154 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:35:47.025183 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:35:47.025212 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:35:47.025242 kernel: audit: type=1403 audit(1757396145.334:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:35:47.025274 systemd[1]: Successfully loaded SELinux policy in 72.637ms.
Sep 9 05:35:47.025318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.181ms.
Sep 9 05:35:47.025357 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:47.025394 systemd[1]: Detected virtualization google.
Sep 9 05:35:47.025427 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:47.025460 systemd[1]: Detected first boot.
Sep 9 05:35:47.025499 systemd[1]: Initializing machine ID from random generator.
Sep 9 05:35:47.025535 zram_generator::config[1094]: No configuration found.
Sep 9 05:35:47.029591 kernel: Guest personality initialized and is inactive
Sep 9 05:35:47.029628 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 05:35:47.029655 kernel: Initialized host personality
Sep 9 05:35:47.029685 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:35:47.029718 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:35:47.029752 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:35:47.029792 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:35:47.029872 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:35:47.031913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:35:47.031957 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:35:47.031990 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:35:47.032025 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:35:47.032053 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:35:47.032094 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:35:47.032127 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:35:47.032160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:35:47.032191 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:35:47.032220 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:47.032253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:47.032285 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:35:47.032328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:35:47.032370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:35:47.032407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:47.032440 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:35:47.032470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:47.032502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:47.032534 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:35:47.032567 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:47.032605 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:35:47.032638 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:35:47.032671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:47.032703 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:35:47.032735 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:47.032769 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:47.032800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:35:47.032833 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:35:47.032898 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:35:47.032938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:47.032971 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:47.033001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:47.033034 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:35:47.033070 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:35:47.033104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:35:47.033136 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:35:47.033170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:47.033203 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:35:47.033236 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:35:47.033271 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:35:47.033311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:35:47.033346 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:35:47.033383 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:35:47.033417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:47.033449 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:35:47.033481 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:35:47.033514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:47.033547 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:35:47.033578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:47.033611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:35:47.033649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:47.033682 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:35:47.033715 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:35:47.033749 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:35:47.033781 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:35:47.033814 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:35:47.042720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:47.044928 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:35:47.044977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:35:47.045013 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:35:47.045045 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:35:47.045075 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:35:47.045110 kernel: fuse: init (API version 7.41)
Sep 9 05:35:47.045143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:35:47.045176 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:35:47.045210 systemd[1]: Stopped verity-setup.service.
Sep 9 05:35:47.045245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:47.045285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:35:47.045329 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:35:47.045362 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:35:47.045393 kernel: loop: module loaded
Sep 9 05:35:47.045424 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:35:47.045458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:35:47.045486 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:35:47.045520 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:47.045557 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:35:47.045591 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:35:47.045621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:47.045654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:47.045686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:47.045718 kernel: ACPI: bus type drm_connector registered
Sep 9 05:35:47.045751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:47.049183 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:35:47.049945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:35:47.050033 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:35:47.050068 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:35:47.050101 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:47.050134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:47.050168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:47.050197 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:35:47.050232 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:35:47.050282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:35:47.050329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:35:47.050364 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:35:47.053922 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:35:47.053982 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:35:47.054015 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:35:47.054050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:47.054085 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:35:47.054131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:35:47.054165 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:35:47.054201 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:35:47.054234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:35:47.054338 systemd-journald[1161]: Collecting audit messages is disabled.
Sep 9 05:35:47.054415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:35:47.054448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:47.054482 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:35:47.054520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:35:47.054555 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:35:47.054595 kernel: loop0: detected capacity change from 0 to 50720
Sep 9 05:35:47.054626 systemd-journald[1161]: Journal started
Sep 9 05:35:47.054693 systemd-journald[1161]: Runtime Journal (/run/log/journal/cdb9ee432be344fb8d37a0784d77c2d1) is 8M, max 148.9M, 140.9M free.
Sep 9 05:35:46.302684 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:35:46.317690 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 05:35:46.318472 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:35:47.068521 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:35:47.066974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:35:47.069570 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:35:47.089286 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:35:47.086787 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:35:47.095442 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:35:47.157736 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:47.161748 systemd-journald[1161]: Time spent on flushing to /var/log/journal/cdb9ee432be344fb8d37a0784d77c2d1 is 104.077ms for 965 entries.
Sep 9 05:35:47.161748 systemd-journald[1161]: System Journal (/var/log/journal/cdb9ee432be344fb8d37a0784d77c2d1) is 8M, max 584.8M, 576.8M free.
Sep 9 05:35:47.334561 systemd-journald[1161]: Received client request to flush runtime journal.
Sep 9 05:35:47.334661 kernel: loop1: detected capacity change from 0 to 221472
Sep 9 05:35:47.334707 kernel: loop2: detected capacity change from 0 to 128016
Sep 9 05:35:47.213995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:47.247987 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:35:47.272018 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:35:47.290820 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:35:47.321279 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:35:47.339889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:35:47.393881 kernel: loop3: detected capacity change from 0 to 110984
Sep 9 05:35:47.410338 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:35:47.424264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:35:47.488239 kernel: loop4: detected capacity change from 0 to 50720
Sep 9 05:35:47.513926 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Sep 9 05:35:47.515099 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Sep 9 05:35:47.549910 kernel: loop5: detected capacity change from 0 to 221472
Sep 9 05:35:47.550338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:47.583987 kernel: loop6: detected capacity change from 0 to 128016
Sep 9 05:35:47.633641 kernel: loop7: detected capacity change from 0 to 110984
Sep 9 05:35:47.685079 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 9 05:35:47.689895 (sd-merge)[1238]: Merged extensions into '/usr'.
Sep 9 05:35:47.703790 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:35:47.703946 systemd[1]: Reloading...
Sep 9 05:35:47.874234 zram_generator::config[1268]: No configuration found.
Sep 9 05:35:48.296638 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:35:48.457497 systemd[1]: Reloading finished in 751 ms.
Sep 9 05:35:48.477548 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:35:48.488916 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:35:48.514362 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:35:48.528086 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:35:48.564033 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:35:48.564239 systemd[1]: Reloading...
Sep 9 05:35:48.588600 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:35:48.588667 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:35:48.593045 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:35:48.593598 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:35:48.599459 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:35:48.601961 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Sep 9 05:35:48.602102 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Sep 9 05:35:48.619952 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:35:48.620189 systemd-tmpfiles[1306]: Skipping /boot
Sep 9 05:35:48.669886 zram_generator::config[1332]: No configuration found.
Sep 9 05:35:48.687826 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:35:48.694937 systemd-tmpfiles[1306]: Skipping /boot
Sep 9 05:35:49.078995 systemd[1]: Reloading finished in 513 ms.
Sep 9 05:35:49.102623 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:35:49.134789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:49.156053 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:35:49.170112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:35:49.188283 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:35:49.207324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:49.219448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:49.232406 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:35:49.251999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.252433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:49.256704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:49.271185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:49.289039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:49.298173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:49.298415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:49.307812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:35:49.317031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.324540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:49.324985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:49.337126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:49.337482 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:49.350069 augenrules[1404]: No rules
Sep 9 05:35:49.349850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:49.351254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:49.362138 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:35:49.362599 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:35:49.364564 systemd-udevd[1388]: Using default interface naming scheme 'v255'.
Sep 9 05:35:49.375269 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:35:49.416501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.418385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:49.422534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:49.435452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:49.455906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:49.465155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:49.465947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:49.474463 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:35:49.483991 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.487417 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:35:49.496833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:49.508659 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:35:49.524680 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:35:49.535965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:49.536303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:49.548082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:49.548436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:49.560088 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:49.560416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:49.570121 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:35:49.608635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.610986 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:35:49.619380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:49.623240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:49.636069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:35:49.644123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:49.649024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:49.669213 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 9 05:35:49.677170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:49.677250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:49.688265 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:35:49.697044 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:35:49.707083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:35:49.707139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:49.717120 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:35:49.726798 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:35:49.727927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:35:49.768812 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 9 05:35:49.779077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:49.779449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:49.790591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:49.792084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:49.793568 augenrules[1454]: /sbin/augenrules: No change
Sep 9 05:35:49.804759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:49.805132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:49.829886 augenrules[1488]: No rules
Sep 9 05:35:49.830671 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 9 05:35:49.840005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:35:49.840141 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:35:49.840987 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:35:49.842951 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:35:49.883626 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Sep 9 05:35:49.885231 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 9 05:35:49.897526 systemd-resolved[1385]: Positive Trust Anchors:
Sep 9 05:35:49.899154 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:35:49.899238 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:35:49.921778 systemd-resolved[1385]: Defaulting to hostname 'linux'.
Sep 9 05:35:49.929954 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:35:49.932581 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:49.942985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:49.953042 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:35:49.962179 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:35:49.973821 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:35:49.984038 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 05:35:49.994403 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:35:50.004335 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:35:50.015113 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:35:50.027095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:35:50.027180 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:35:50.035055 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:35:50.046015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:35:50.059719 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:35:50.076569 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:35:50.090338 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:35:50.101091 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:35:50.113880 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 05:35:50.116584 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:35:50.131935 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 9 05:35:50.142318 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:35:50.185564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:35:50.195797 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:35:50.203044 systemd-networkd[1467]: lo: Link UP
Sep 9 05:35:50.203766 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:35:50.204101 systemd-networkd[1467]: lo: Gained carrier
Sep 9 05:35:50.211187 systemd-networkd[1467]: Enumeration completed
Sep 9 05:35:50.211496 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:35:50.211553 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:35:50.217064 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:35:50.217082 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:35:50.217457 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 05:35:50.222179 systemd-networkd[1467]: eth0: Link UP
Sep 9 05:35:50.222521 systemd-networkd[1467]: eth0: Gained carrier
Sep 9 05:35:50.222566 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:35:50.233540 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:35:50.237950 systemd-networkd[1467]: eth0: Overlong DHCP hostname received, shortened from 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb.c.flatcar-212911.internal' to 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb'
Sep 9 05:35:50.237981 systemd-networkd[1467]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 9 05:35:50.247265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:35:50.264268 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:35:50.293234 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:35:50.302065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:35:50.312889 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 05:35:50.319370 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 05:35:50.338466 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:35:50.353100 systemd[1]: Started ntpd.service - Network Time Service.
Sep 9 05:35:50.365928 jq[1520]: false
Sep 9 05:35:50.365120 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:35:50.381933 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:35:50.420879 extend-filesystems[1521]: Found /dev/sda6
Sep 9 05:35:50.438397 kernel: ACPI: button: Power Button [PWRF]
Sep 9 05:35:50.459214 extend-filesystems[1521]: Found /dev/sda9
Sep 9 05:35:50.470744 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache
Sep 9 05:35:50.470770 oslogin_cache_refresh[1524]: Refreshing passwd entry cache
Sep 9 05:35:50.479010 extend-filesystems[1521]: Checking size of /dev/sda9
Sep 9 05:35:50.483813 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:35:50.499912 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 9 05:35:50.514889 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:35:50.517290 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting
Sep 9 05:35:50.517290 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:35:50.517290 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache
Sep 9 05:35:50.515451 oslogin_cache_refresh[1524]: Failure getting users, quitting
Sep 9 05:35:50.515490 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:35:50.515566 oslogin_cache_refresh[1524]: Refreshing group entry cache
Sep 9 05:35:50.525828 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 9 05:35:50.527725 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting
Sep 9 05:35:50.527833 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:35:50.527721 oslogin_cache_refresh[1524]: Failure getting groups, quitting
Sep 9 05:35:50.527749 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:35:50.528696 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:35:50.532210 coreos-metadata[1517]: Sep 09 05:35:50.532 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 9 05:35:50.535144 ntpd[1528]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: ----------------------------------------------------
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: corporation. Support and training for ntp-4 are
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: available at https://www.nwtime.org/support
Sep 9 05:35:50.535748 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: ----------------------------------------------------
Sep 9 05:35:50.535189 ntpd[1528]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:35:50.536669 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:35:50.535210 ntpd[1528]: ----------------------------------------------------
Sep 9 05:35:50.535228 ntpd[1528]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:35:50.535246 ntpd[1528]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:35:50.535266 ntpd[1528]: corporation. Support and training for ntp-4 are
Sep 9 05:35:50.535285 ntpd[1528]: available at https://www.nwtime.org/support
Sep 9 05:35:50.535306 ntpd[1528]: ----------------------------------------------------
Sep 9 05:35:50.541895 coreos-metadata[1517]: Sep 09 05:35:50.541 INFO Fetch successful
Sep 9 05:35:50.541895 coreos-metadata[1517]: Sep 09 05:35:50.541 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 9 05:35:50.545212 coreos-metadata[1517]: Sep 09 05:35:50.544 INFO Fetch successful
Sep 9 05:35:50.545212 coreos-metadata[1517]: Sep 09 05:35:50.545 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 9 05:35:50.547742 coreos-metadata[1517]: Sep 09 05:35:50.547 INFO Fetch successful
Sep 9 05:35:50.547884 coreos-metadata[1517]: Sep 09 05:35:50.547 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 9 05:35:50.551679 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:35:50.554588 ntpd[1528]: proto: precision = 0.086 usec (-23)
Sep 9 05:35:50.555196 coreos-metadata[1517]: Sep 09 05:35:50.555 INFO Fetch successful
Sep 9 05:35:50.555613 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: proto: precision = 0.086 usec (-23)
Sep 9 05:35:50.558052 ntpd[1528]: basedate set to 2025-08-28
Sep 9 05:35:50.560072 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: basedate set to 2025-08-28
Sep 9 05:35:50.560072 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:35:50.558090 ntpd[1528]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:35:50.563653 ntpd[1528]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:35:50.565382 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:35:50.565592 ntpd[1528]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:35:50.565693 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:35:50.566126 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:35:50.566222 extend-filesystems[1521]: Resized partition /dev/sda9
Sep 9 05:35:50.599011 kernel: ACPI: button: Sleep Button [SLPF]
Sep 9 05:35:50.599096 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 9 05:35:50.599146 extend-filesystems[1552]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:35:50.627095 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 9 05:35:50.578122 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:35:50.566958 ntpd[1528]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listen normally on 3 eth0 10.128.0.4:123
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listen normally on 4 lo [::1]:123
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: bind(21) AF_INET6 fe80::4001:aff:fe80:4%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4%2#123
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: failed to init interface for address fe80::4001:aff:fe80:4%2
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: Listening on routing socket on fd #21 for interface updates
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:35:50.627924 ntpd[1528]: 9 Sep 05:35:50 ntpd[1528]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:35:50.625954 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:35:50.567058 ntpd[1528]: Listen normally on 3 eth0 10.128.0.4:123
Sep 9 05:35:50.627969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:35:50.567599 ntpd[1528]: Listen normally on 4 lo [::1]:123
Sep 9 05:35:50.628603 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 05:35:50.568941 ntpd[1528]: bind(21) AF_INET6 fe80::4001:aff:fe80:4%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:35:50.568990 ntpd[1528]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4%2#123
Sep 9 05:35:50.629315 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 05:35:50.569021 ntpd[1528]: failed to init interface for address fe80::4001:aff:fe80:4%2
Sep 9 05:35:50.569086 ntpd[1528]: Listening on routing socket on fd #21 for interface updates
Sep 9 05:35:50.571319 ntpd[1528]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:35:50.571364 ntpd[1528]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:35:50.638888 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 9 05:35:50.645929 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:35:50.655642 kernel: EDAC MC: Ver: 3.0.0
Sep 9 05:35:50.646966 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:35:50.653683 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:35:50.655042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:35:50.661525 extend-filesystems[1552]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 9 05:35:50.661525 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 9 05:35:50.661525 extend-filesystems[1552]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 9 05:35:50.670395 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:35:50.728338 extend-filesystems[1521]: Resized filesystem in /dev/sda9
Sep 9 05:35:50.737088 jq[1550]: true
Sep 9 05:35:50.670874 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:35:50.783254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 9 05:35:50.816804 jq[1572]: true
Sep 9 05:35:50.865901 update_engine[1544]: I20250909 05:35:50.860567 1544 main.cc:92] Flatcar Update Engine starting
Sep 9 05:35:50.880541 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 05:35:50.891041 systemd[1]: Reached target network.target - Network.
Sep 9 05:35:50.904087 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:35:50.917284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:35:50.932257 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:35:50.938646 tar[1562]: linux-amd64/helm
Sep 9 05:35:50.954969 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:35:50.968735 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 05:35:51.002205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:35:51.045235 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 05:35:51.105725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:35:51.144656 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:35:51.162906 bash[1617]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:35:51.160454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:35:51.191696 systemd[1]: Starting sshkeys.service...
Sep 9 05:35:51.232271 dbus-daemon[1518]: [system] SELinux support is enabled
Sep 9 05:35:51.232575 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:35:51.248091 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:35:51.259144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:35:51.259268 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:35:51.260977 dbus-daemon[1518]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1467 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 05:35:51.265323 update_engine[1544]: I20250909 05:35:51.265254 1544 update_check_scheduler.cc:74] Next update check in 7m51s
Sep 9 05:35:51.270142 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:35:51.270186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:35:51.292030 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:35:51.303570 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 05:35:51.350167 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:35:51.376378 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 05:35:51.385355 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 05:35:51.385398 systemd-logind[1542]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 9 05:35:51.385431 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 05:35:51.388501 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 05:35:51.390445 systemd-logind[1542]: New seat seat0.
Sep 9 05:35:51.402563 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 05:35:51.403125 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:35:51.571081 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 05:35:51.591808 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 05:35:51.603377 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1625 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 05:35:51.617179 ntpd[1528]: bind(24) AF_INET6 fe80::4001:aff:fe80:4%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:35:51.617247 ntpd[1528]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4%2#123
Sep 9 05:35:51.617788 ntpd[1528]: 9 Sep 05:35:51 ntpd[1528]: bind(24) AF_INET6 fe80::4001:aff:fe80:4%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:35:51.617788 ntpd[1528]: 9 Sep 05:35:51 ntpd[1528]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4%2#123
Sep 9 05:35:51.617788 ntpd[1528]: 9 Sep 05:35:51 ntpd[1528]: failed to init interface for address fe80::4001:aff:fe80:4%2
Sep 9 05:35:51.617275 ntpd[1528]: failed to init interface for address fe80::4001:aff:fe80:4%2
Sep 9 05:35:51.626111 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 05:35:51.726574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:51.862877 coreos-metadata[1626]: Sep 09 05:35:51.861 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 9 05:35:51.867752 coreos-metadata[1626]: Sep 09 05:35:51.866 INFO Fetch failed with 404: resource not found
Sep 9 05:35:51.867752 coreos-metadata[1626]: Sep 09 05:35:51.866 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 9 05:35:51.867752 coreos-metadata[1626]: Sep 09 05:35:51.867 INFO Fetch successful
Sep 9 05:35:51.867752 coreos-metadata[1626]: Sep 09 05:35:51.867 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 9 05:35:51.870113 coreos-metadata[1626]: Sep 09 05:35:51.868 INFO Fetch failed with 404: resource not found
Sep 9 05:35:51.870113 coreos-metadata[1626]: Sep 09 05:35:51.868 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 9 05:35:51.874468 coreos-metadata[1626]: Sep 09 05:35:51.873 INFO Fetch failed with 404: resource not found
Sep 9 05:35:51.874468 coreos-metadata[1626]: Sep 09 05:35:51.873 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 9 05:35:51.875354 coreos-metadata[1626]: Sep 09 05:35:51.875 INFO Fetch successful
Sep 9 05:35:51.880070 systemd-networkd[1467]: eth0: Gained IPv6LL
Sep 9 05:35:51.883049 unknown[1626]: wrote ssh authorized keys file for user: core
Sep 9 05:35:51.907440 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 05:35:51.970260 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 05:35:51.984881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:35:51.993273 update-ssh-keys[1639]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:35:51.999787 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 05:35:52.013389 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 9 05:35:52.040526 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 05:35:52.069974 systemd[1]: Finished sshkeys.service.
Sep 9 05:35:52.191286 init.sh[1644]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 9 05:35:52.194221 init.sh[1644]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 9 05:35:52.194465 init.sh[1644]: + /usr/bin/google_instance_setup
Sep 9 05:35:52.206412 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 05:35:52.306143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 05:35:52.356566 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:35:52.431828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 05:35:52.439985 containerd[1614]: time="2025-09-09T05:35:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:35:52.447930 containerd[1614]: time="2025-09-09T05:35:52.447738987Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:35:52.453477 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 05:35:52.468386 systemd[1]: Started sshd@0-10.128.0.4:22-139.178.89.65:37002.service - OpenSSH per-connection server daemon (139.178.89.65:37002).
Sep 9 05:35:52.532015 polkitd[1631]: Started polkitd version 126
Sep 9 05:35:52.539179 containerd[1614]: time="2025-09-09T05:35:52.539121085Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.761µs"
Sep 9 05:35:52.539940 containerd[1614]: time="2025-09-09T05:35:52.539901019Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:35:52.540460 containerd[1614]: time="2025-09-09T05:35:52.540427112Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:35:52.542213 containerd[1614]: time="2025-09-09T05:35:52.541936205Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:35:52.546734 containerd[1614]: time="2025-09-09T05:35:52.546197758Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:35:52.546734 containerd[1614]: time="2025-09-09T05:35:52.546272320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:35:52.546734 containerd[1614]: time="2025-09-09T05:35:52.546392458Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:35:52.546734 containerd[1614]: time="2025-09-09T05:35:52.546415476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549083146Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549206861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549269696Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549289615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549470218Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:35:52.549877 containerd[1614]: time="2025-09-09T05:35:52.549822226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:35:52.550907 containerd[1614]: time="2025-09-09T05:35:52.550386367Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:35:52.550907 containerd[1614]: time="2025-09-09T05:35:52.550417280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:35:52.550907 containerd[1614]: time="2025-09-09T05:35:52.550467864Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:35:52.552874 containerd[1614]: time="2025-09-09T05:35:52.552120354Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:35:52.553330 containerd[1614]: time="2025-09-09T05:35:52.553126772Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:35:52.568196 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 05:35:52.569535 polkitd[1631]: Loading rules from directory /etc/polkit-1/rules.d
Sep 9 05:35:52.568625 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 05:35:52.570402 polkitd[1631]: Loading rules from directory /run/polkit-1/rules.d
Sep 9 05:35:52.570549 polkitd[1631]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 05:35:52.571457 polkitd[1631]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 9 05:35:52.571522 polkitd[1631]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 05:35:52.571584 polkitd[1631]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 9 05:35:52.572688 polkitd[1631]: Finished loading, compiling and executing 2 rules
Sep 9 05:35:52.573742 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 9 05:35:52.575838 polkitd[1631]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.574756314Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.574835471Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575012218Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575087983Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575154228Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575176260Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575198564Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575242245Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575265876Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575285562Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575324661Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575366739Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575773628Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:35:52.576225 containerd[1614]: time="2025-09-09T05:35:52.575810429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576071597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576102900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576124768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576142452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576162893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576181928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576202203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576220694Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576244334Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576344781Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 05:35:52.576819 containerd[1614]: time="2025-09-09T05:35:52.576367417Z" level=info msg="Start snapshots syncer"
Sep 9 05:35:52.578012 containerd[1614]: time="2025-09-09T05:35:52.577963941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 05:35:52.578865 containerd[1614]: time="2025-09-09T05:35:52.578757576Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 05:35:52.579235 containerd[1614]: time="2025-09-09T05:35:52.579185730Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 05:35:52.579602 containerd[1614]: time="2025-09-09T05:35:52.579574641Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 05:35:52.580035 containerd[1614]: time="2025-09-09T05:35:52.579986322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 05:35:52.580171 containerd[1614]: time="2025-09-09T05:35:52.580147634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 05:35:52.580338 containerd[1614]: time="2025-09-09T05:35:52.580272987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 05:35:52.580338 containerd[1614]: time="2025-09-09T05:35:52.580301334Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 05:35:52.580493 containerd[1614]: time="2025-09-09T05:35:52.580469350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 05:35:52.580634 containerd[1614]: time="2025-09-09T05:35:52.580612553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 05:35:52.580813 containerd[1614]: time="2025-09-09T05:35:52.580751920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 05:35:52.580785 systemd[1]: Started polkit.service - Authorization Manager.
Sep 9 05:35:52.581610 containerd[1614]: time="2025-09-09T05:35:52.581448783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 05:35:52.581610 containerd[1614]: time="2025-09-09T05:35:52.581494120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 05:35:52.581610 containerd[1614]: time="2025-09-09T05:35:52.581544468Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 05:35:52.581999 containerd[1614]: time="2025-09-09T05:35:52.581939787Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:52.582113 containerd[1614]: time="2025-09-09T05:35:52.582089312Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:52.582272 containerd[1614]: time="2025-09-09T05:35:52.582199833Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:52.582272 containerd[1614]: time="2025-09-09T05:35:52.582225557Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:52.582272 containerd[1614]: time="2025-09-09T05:35:52.582241109Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 05:35:52.582693 containerd[1614]: time="2025-09-09T05:35:52.582666304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 05:35:52.582953 containerd[1614]: time="2025-09-09T05:35:52.582837676Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 05:35:52.582953 containerd[1614]: time="2025-09-09T05:35:52.582919210Z" level=info msg="runtime interface created"
Sep 9 05:35:52.582953 containerd[1614]: time="2025-09-09T05:35:52.582930623Z" level=info msg="created NRI interface"
Sep 9 05:35:52.583173 containerd[1614]: time="2025-09-09T05:35:52.583149375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 05:35:52.583289 containerd[1614]: time="2025-09-09T05:35:52.583272923Z" level=info msg="Connect containerd service"
Sep 9 05:35:52.583444 containerd[1614]: time="2025-09-09T05:35:52.583386878Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 05:35:52.585358 containerd[1614]: time="2025-09-09T05:35:52.585319676Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 05:35:52.596485 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 05:35:52.680604 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 05:35:52.681506 systemd-hostnamed[1625]: Hostname set to (transient)
Sep 9 05:35:52.686736 systemd-resolved[1385]: System hostname changed to 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb'.
Sep 9 05:35:52.696748 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 05:35:52.710761 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 05:35:52.720548 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 05:35:52.891738 containerd[1614]: time="2025-09-09T05:35:52.891665081Z" level=info msg="Start subscribing containerd event"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892007620Z" level=info msg="Start recovering state"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892195282Z" level=info msg="Start event monitor"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892220476Z" level=info msg="Start cni network conf syncer for default"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892239230Z" level=info msg="Start streaming server"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892257213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892269771Z" level=info msg="runtime interface starting up..."
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892280925Z" level=info msg="starting plugins..."
Sep 9 05:35:52.892816 containerd[1614]: time="2025-09-09T05:35:52.892302800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 05:35:52.895126 containerd[1614]: time="2025-09-09T05:35:52.894750337Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 05:35:52.896865 containerd[1614]: time="2025-09-09T05:35:52.895509013Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 05:35:52.898483 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 05:35:52.899041 containerd[1614]: time="2025-09-09T05:35:52.899009406Z" level=info msg="containerd successfully booted in 0.463321s"
Sep 9 05:35:53.010969 sshd[1671]: Accepted publickey for core from 139.178.89.65 port 37002 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:53.017978 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:53.040722 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 05:35:53.056070 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 05:35:53.110282 systemd-logind[1542]: New session 1 of user core.
Sep 9 05:35:53.128369 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 05:35:53.149336 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 05:35:53.207647 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 05:35:53.220997 systemd-logind[1542]: New session c1 of user core.
Sep 9 05:35:53.287590 tar[1562]: linux-amd64/LICENSE
Sep 9 05:35:53.287590 tar[1562]: linux-amd64/README.md
Sep 9 05:35:53.328159 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 05:35:53.657190 instance-setup[1652]: INFO Running google_set_multiqueue.
Sep 9 05:35:53.689938 systemd[1704]: Queued start job for default target default.target.
Sep 9 05:35:53.694429 instance-setup[1652]: INFO Set channels for eth0 to 2.
Sep 9 05:35:53.699156 systemd[1704]: Created slice app.slice - User Application Slice.
Sep 9 05:35:53.699210 systemd[1704]: Reached target paths.target - Paths.
Sep 9 05:35:53.699437 systemd[1704]: Reached target timers.target - Timers.
Sep 9 05:35:53.701598 instance-setup[1652]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Sep 9 05:35:53.703997 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 05:35:53.708135 instance-setup[1652]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Sep 9 05:35:53.708229 instance-setup[1652]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Sep 9 05:35:53.709195 instance-setup[1652]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Sep 9 05:35:53.709799 instance-setup[1652]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Sep 9 05:35:53.712147 instance-setup[1652]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Sep 9 05:35:53.712718 instance-setup[1652]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Sep 9 05:35:53.714900 instance-setup[1652]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Sep 9 05:35:53.728092 instance-setup[1652]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 9 05:35:53.737062 instance-setup[1652]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 9 05:35:53.739171 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 05:35:53.741457 systemd[1704]: Reached target sockets.target - Sockets.
Sep 9 05:35:53.741584 systemd[1704]: Reached target basic.target - Basic System.
Sep 9 05:35:53.741675 systemd[1704]: Reached target default.target - Main User Target.
Sep 9 05:35:53.741741 systemd[1704]: Startup finished in 504ms.
Sep 9 05:35:53.741749 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 05:35:53.743782 instance-setup[1652]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Sep 9 05:35:53.744182 instance-setup[1652]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Sep 9 05:35:53.760106 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 05:35:53.783743 init.sh[1644]: + /usr/bin/google_metadata_script_runner --script-type startup
Sep 9 05:35:54.028339 systemd[1]: Started sshd@1-10.128.0.4:22-139.178.89.65:37014.service - OpenSSH per-connection server daemon (139.178.89.65:37014).
Sep 9 05:35:54.050771 startup-script[1746]: INFO Starting startup scripts.
Sep 9 05:35:54.063706 startup-script[1746]: INFO No startup scripts found in metadata.
Sep 9 05:35:54.063803 startup-script[1746]: INFO Finished running startup scripts.
Sep 9 05:35:54.117983 init.sh[1644]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Sep 9 05:35:54.117983 init.sh[1644]: + daemon_pids=()
Sep 9 05:35:54.118204 init.sh[1644]: + for d in accounts clock_skew network
Sep 9 05:35:54.118450 init.sh[1644]: + daemon_pids+=($!)
Sep 9 05:35:54.118559 init.sh[1644]: + for d in accounts clock_skew network
Sep 9 05:35:54.118802 init.sh[1644]: + daemon_pids+=($!)
Sep 9 05:35:54.118879 init.sh[1754]: + /usr/bin/google_accounts_daemon
Sep 9 05:35:54.119732 init.sh[1644]: + for d in accounts clock_skew network
Sep 9 05:35:54.119732 init.sh[1644]: + daemon_pids+=($!)
Sep 9 05:35:54.120338 init.sh[1755]: + /usr/bin/google_clock_skew_daemon
Sep 9 05:35:54.120668 init.sh[1644]: + NOTIFY_SOCKET=/run/systemd/notify
Sep 9 05:35:54.120668 init.sh[1644]: + /usr/bin/systemd-notify --ready
Sep 9 05:35:54.120821 init.sh[1756]: + /usr/bin/google_network_daemon
Sep 9 05:35:54.143436 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Sep 9 05:35:54.156133 init.sh[1644]: + wait -n 1754 1755 1756
Sep 9 05:35:54.409383 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 37014 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:54.412578 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:54.430959 systemd-logind[1542]: New session 2 of user core.
Sep 9 05:35:54.439196 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 05:35:54.450143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:35:54.467538 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 05:35:54.471215 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:35:54.476692 systemd[1]: Startup finished in 4.340s (kernel) + 9.471s (initrd) + 9.211s (userspace) = 23.022s.
Sep 9 05:35:54.537377 ntpd[1528]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4%2]:123
Sep 9 05:35:54.537915 ntpd[1528]: 9 Sep 05:35:54 ntpd[1528]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4%2]:123
Sep 9 05:35:54.668902 sshd[1766]: Connection closed by 139.178.89.65 port 37014
Sep 9 05:35:54.668184 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:54.681423 systemd[1]: sshd@1-10.128.0.4:22-139.178.89.65:37014.service: Deactivated successfully.
Sep 9 05:35:54.682929 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit.
Sep 9 05:35:54.686769 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 05:35:54.695522 systemd-logind[1542]: Removed session 2.
Sep 9 05:35:54.724455 google-networking[1756]: INFO Starting Google Networking daemon.
Sep 9 05:35:54.730299 systemd[1]: Started sshd@2-10.128.0.4:22-139.178.89.65:37030.service - OpenSSH per-connection server daemon (139.178.89.65:37030).
Sep 9 05:35:54.775455 google-clock-skew[1755]: INFO Starting Google Clock Skew daemon.
Sep 9 05:35:54.800680 google-clock-skew[1755]: INFO Clock drift token has changed: 0.
Sep 9 05:35:54.838910 groupadd[1787]: group added to /etc/group: name=google-sudoers, GID=1000
Sep 9 05:35:54.844216 groupadd[1787]: group added to /etc/gshadow: name=google-sudoers
Sep 9 05:35:54.905217 groupadd[1787]: new group: name=google-sudoers, GID=1000
Sep 9 05:35:54.939215 google-accounts[1754]: INFO Starting Google Accounts daemon.
Sep 9 05:35:54.958034 google-accounts[1754]: WARNING OS Login not installed.
Sep 9 05:35:54.960208 google-accounts[1754]: INFO Creating a new user account for 0.
Sep 9 05:35:54.966642 init.sh[1797]: useradd: invalid user name '0': use --badname to ignore
Sep 9 05:35:54.967124 google-accounts[1754]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Sep 9 05:35:55.079312 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 37030 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:55.082129 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:55.094206 systemd-logind[1542]: New session 3 of user core.
Sep 9 05:35:55.101152 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 05:35:55.293456 sshd[1799]: Connection closed by 139.178.89.65 port 37030
Sep 9 05:35:55.295688 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:55.306325 systemd[1]: sshd@2-10.128.0.4:22-139.178.89.65:37030.service: Deactivated successfully.
Sep 9 05:35:55.310895 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 05:35:55.314451 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit.
Sep 9 05:35:55.317467 systemd-logind[1542]: Removed session 3.
Sep 9 05:35:55.353231 systemd[1]: Started sshd@3-10.128.0.4:22-139.178.89.65:37046.service - OpenSSH per-connection server daemon (139.178.89.65:37046).
Sep 9 05:35:55.423882 kubelet[1764]: E0909 05:35:55.423798 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:35:55.427590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:35:55.427919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:35:55.428579 systemd[1]: kubelet.service: Consumed 1.390s CPU time, 265.1M memory peak.
Sep 9 05:35:56.000089 systemd-resolved[1385]: Clock change detected. Flushing caches.
Sep 9 05:35:56.001300 google-clock-skew[1755]: INFO Synced system time with hardware clock.
Sep 9 05:35:56.125488 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 37046 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:56.127303 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:56.134562 systemd-logind[1542]: New session 4 of user core.
Sep 9 05:35:56.142769 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 05:35:56.347426 sshd[1811]: Connection closed by 139.178.89.65 port 37046
Sep 9 05:35:56.348842 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:56.355105 systemd[1]: sshd@3-10.128.0.4:22-139.178.89.65:37046.service: Deactivated successfully.
Sep 9 05:35:56.357852 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 05:35:56.359186 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit.
Sep 9 05:35:56.361722 systemd-logind[1542]: Removed session 4.
Sep 9 05:35:56.402279 systemd[1]: Started sshd@4-10.128.0.4:22-139.178.89.65:37052.service - OpenSSH per-connection server daemon (139.178.89.65:37052).
Sep 9 05:35:56.707963 sshd[1817]: Accepted publickey for core from 139.178.89.65 port 37052 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:56.710194 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:56.718528 systemd-logind[1542]: New session 5 of user core.
Sep 9 05:35:56.729698 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 05:35:56.903029 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 05:35:56.903606 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:56.924506 sudo[1821]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:56.968271 sshd[1820]: Connection closed by 139.178.89.65 port 37052
Sep 9 05:35:56.969797 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:56.975749 systemd[1]: sshd@4-10.128.0.4:22-139.178.89.65:37052.service: Deactivated successfully.
Sep 9 05:35:56.978623 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 05:35:56.981078 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit.
Sep 9 05:35:56.983400 systemd-logind[1542]: Removed session 5.
Sep 9 05:35:57.027819 systemd[1]: Started sshd@5-10.128.0.4:22-139.178.89.65:37060.service - OpenSSH per-connection server daemon (139.178.89.65:37060).
Sep 9 05:35:57.335133 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 37060 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:35:57.337231 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:35:57.345348 systemd-logind[1542]: New session 6 of user core.
Sep 9 05:35:57.347764 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 05:35:57.516721 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 05:35:57.517284 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:57.524898 sudo[1832]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:57.539738 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 05:35:57.540265 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:35:57.554647 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:35:57.639824 augenrules[1854]: No rules
Sep 9 05:35:57.642113 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:35:57.642497 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:35:57.644896 sudo[1831]: pam_unix(sudo:session): session closed for user root
Sep 9 05:35:57.689075 sshd[1830]: Connection closed by 139.178.89.65 port 37060
Sep 9 05:35:57.690010 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Sep 9 05:35:57.695746 systemd[1]: sshd@5-10.128.0.4:22-139.178.89.65:37060.service: Deactivated successfully.
Sep 9 05:35:57.698630 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 05:35:57.701872 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit.
Sep 9 05:35:57.703449 systemd-logind[1542]: Removed session 6.
Sep 9 05:36:02.746809 systemd[1]: Started sshd@6-10.128.0.4:22-139.178.89.65:50492.service - OpenSSH per-connection server daemon (139.178.89.65:50492).
Sep 9 05:36:03.059091 sshd[1863]: Accepted publickey for core from 139.178.89.65 port 50492 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:36:03.061039 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:36:03.069262 systemd-logind[1542]: New session 7 of user core.
Sep 9 05:36:03.074740 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:36:03.244105 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:36:03.244774 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:36:03.732827 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:36:03.755227 (dockerd)[1885]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:36:04.106091 dockerd[1885]: time="2025-09-09T05:36:04.105904937Z" level=info msg="Starting up" Sep 9 05:36:04.107323 dockerd[1885]: time="2025-09-09T05:36:04.107284930Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:36:04.123477 dockerd[1885]: time="2025-09-09T05:36:04.123332880Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:36:04.185351 dockerd[1885]: time="2025-09-09T05:36:04.185059866Z" level=info msg="Loading containers: start." Sep 9 05:36:04.208507 kernel: Initializing XFRM netlink socket Sep 9 05:36:04.591966 systemd-networkd[1467]: docker0: Link UP Sep 9 05:36:04.599142 dockerd[1885]: time="2025-09-09T05:36:04.599077564Z" level=info msg="Loading containers: done." 
Sep 9 05:36:04.621729 dockerd[1885]: time="2025-09-09T05:36:04.621660982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 05:36:04.621990 dockerd[1885]: time="2025-09-09T05:36:04.621773946Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 05:36:04.621990 dockerd[1885]: time="2025-09-09T05:36:04.621916809Z" level=info msg="Initializing buildkit"
Sep 9 05:36:04.659357 dockerd[1885]: time="2025-09-09T05:36:04.659296574Z" level=info msg="Completed buildkit initialization"
Sep 9 05:36:04.664475 dockerd[1885]: time="2025-09-09T05:36:04.664346045Z" level=info msg="Daemon has completed initialization"
Sep 9 05:36:04.664630 dockerd[1885]: time="2025-09-09T05:36:04.664561836Z" level=info msg="API listen on /run/docker.sock"
Sep 9 05:36:04.664852 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 05:36:05.535791 containerd[1614]: time="2025-09-09T05:36:05.535707668Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 05:36:06.047390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:36:06.052708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:36:06.088290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788008575.mount: Deactivated successfully.
Sep 9 05:36:06.453675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:36:06.466685 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:36:06.579136 kubelet[2117]: E0909 05:36:06.579042 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:36:06.591239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:36:06.591590 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:36:06.592627 systemd[1]: kubelet.service: Consumed 292ms CPU time, 108.9M memory peak.
Sep 9 05:36:08.210860 containerd[1614]: time="2025-09-09T05:36:08.210777825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:08.212493 containerd[1614]: time="2025-09-09T05:36:08.212335657Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28086259"
Sep 9 05:36:08.213932 containerd[1614]: time="2025-09-09T05:36:08.213861364Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:08.218038 containerd[1614]: time="2025-09-09T05:36:08.217487134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:08.218856 containerd[1614]: time="2025-09-09T05:36:08.218810353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.683045362s"
Sep 9 05:36:08.218978 containerd[1614]: time="2025-09-09T05:36:08.218868767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 9 05:36:08.219726 containerd[1614]: time="2025-09-09T05:36:08.219590228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 05:36:09.809195 containerd[1614]: time="2025-09-09T05:36:09.809109030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:09.810803 containerd[1614]: time="2025-09-09T05:36:09.810666787Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24716615"
Sep 9 05:36:09.812011 containerd[1614]: time="2025-09-09T05:36:09.811965426Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:09.815603 containerd[1614]: time="2025-09-09T05:36:09.815526766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:09.817455 containerd[1614]: time="2025-09-09T05:36:09.816954946Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.597321832s"
Sep 9 05:36:09.817455 containerd[1614]: time="2025-09-09T05:36:09.817001250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 9 05:36:09.818322 containerd[1614]: time="2025-09-09T05:36:09.818290781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 05:36:11.117555 containerd[1614]: time="2025-09-09T05:36:11.117474031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:11.119266 containerd[1614]: time="2025-09-09T05:36:11.119082351Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18784343"
Sep 9 05:36:11.120553 containerd[1614]: time="2025-09-09T05:36:11.120508894Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:11.125415 containerd[1614]: time="2025-09-09T05:36:11.125313499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:11.127037 containerd[1614]: time="2025-09-09T05:36:11.126867622Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.308393s"
Sep 9 05:36:11.127037 containerd[1614]: time="2025-09-09T05:36:11.126914894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 9 05:36:11.128120 containerd[1614]: time="2025-09-09T05:36:11.128089058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 05:36:12.413409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797162707.mount: Deactivated successfully.
Sep 9 05:36:13.132760 containerd[1614]: time="2025-09-09T05:36:13.132651555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:13.134218 containerd[1614]: time="2025-09-09T05:36:13.133969116Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30386150"
Sep 9 05:36:13.135367 containerd[1614]: time="2025-09-09T05:36:13.135327093Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:13.138881 containerd[1614]: time="2025-09-09T05:36:13.138841752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:13.139808 containerd[1614]: time="2025-09-09T05:36:13.139764256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.011631463s"
Sep 9 05:36:13.139907 containerd[1614]: time="2025-09-09T05:36:13.139814481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 9 05:36:13.140418 containerd[1614]: time="2025-09-09T05:36:13.140382007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 05:36:13.589479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019436023.mount: Deactivated successfully.
Sep 9 05:36:14.836974 containerd[1614]: time="2025-09-09T05:36:14.836896356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:14.838662 containerd[1614]: time="2025-09-09T05:36:14.838617955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Sep 9 05:36:14.841454 containerd[1614]: time="2025-09-09T05:36:14.839923443Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:14.844331 containerd[1614]: time="2025-09-09T05:36:14.844286438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:14.846145 containerd[1614]: time="2025-09-09T05:36:14.846105521Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.705674595s"
Sep 9 05:36:14.846321 containerd[1614]: time="2025-09-09T05:36:14.846293639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 9 05:36:14.847775 containerd[1614]: time="2025-09-09T05:36:14.847740425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 05:36:15.292818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063768125.mount: Deactivated successfully.
Sep 9 05:36:15.300640 containerd[1614]: time="2025-09-09T05:36:15.300554028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:36:15.302169 containerd[1614]: time="2025-09-09T05:36:15.301816871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Sep 9 05:36:15.303736 containerd[1614]: time="2025-09-09T05:36:15.303669129Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:36:15.307035 containerd[1614]: time="2025-09-09T05:36:15.306979655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:36:15.308215 containerd[1614]: time="2025-09-09T05:36:15.308161554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 460.374134ms"
Sep 9 05:36:15.308391 containerd[1614]: time="2025-09-09T05:36:15.308365145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 05:36:15.309352 containerd[1614]: time="2025-09-09T05:36:15.309300963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 05:36:15.719284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843628295.mount: Deactivated successfully.
Sep 9 05:36:16.841736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 05:36:16.846743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:36:17.186673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:36:17.202247 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:36:17.295718 kubelet[2297]: E0909 05:36:17.295615 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:36:17.300035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:36:17.300336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:36:17.301210 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.6M memory peak.
Sep 9 05:36:18.210455 containerd[1614]: time="2025-09-09T05:36:18.210358472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:18.212137 containerd[1614]: time="2025-09-09T05:36:18.212078440Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218"
Sep 9 05:36:18.213697 containerd[1614]: time="2025-09-09T05:36:18.213620129Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:18.217489 containerd[1614]: time="2025-09-09T05:36:18.217390801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:36:18.219597 containerd[1614]: time="2025-09-09T05:36:18.218960930Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.909620379s"
Sep 9 05:36:18.219597 containerd[1614]: time="2025-09-09T05:36:18.219015034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 9 05:36:21.890997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:36:21.891426 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.6M memory peak.
Sep 9 05:36:21.895374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:36:21.947570 systemd[1]: Reload requested from client PID 2333 ('systemctl') (unit session-7.scope)...
Sep 9 05:36:21.947595 systemd[1]: Reloading...
Sep 9 05:36:22.183468 zram_generator::config[2378]: No configuration found.
Sep 9 05:36:22.500454 systemd[1]: Reloading finished in 551 ms.
Sep 9 05:36:22.584583 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 05:36:22.584964 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 05:36:22.585597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:36:22.585672 systemd[1]: kubelet.service: Consumed 189ms CPU time, 98.3M memory peak.
Sep 9 05:36:22.589282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:36:22.971976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:36:22.987190 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:36:23.046976 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:36:23.046976 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:36:23.046976 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:36:23.047585 kubelet[2430]: I0909 05:36:23.047046 2430 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:36:23.164522 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 9 05:36:23.660422 kubelet[2430]: I0909 05:36:23.660345 2430 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 05:36:23.660422 kubelet[2430]: I0909 05:36:23.660386 2430 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:36:23.660848 kubelet[2430]: I0909 05:36:23.660800 2430 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 05:36:23.695970 kubelet[2430]: E0909 05:36:23.695920 2430 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:36:23.697674 kubelet[2430]: I0909 05:36:23.697489 2430 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:36:23.710495 kubelet[2430]: I0909 05:36:23.710400 2430 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:36:23.717663 kubelet[2430]: I0909 05:36:23.717611 2430 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:36:23.717910 kubelet[2430]: I0909 05:36:23.717788 2430 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 05:36:23.718090 kubelet[2430]: I0909 05:36:23.718036 2430 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:36:23.718363 kubelet[2430]: I0909 05:36:23.718088 2430 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:36:23.718363 kubelet[2430]: I0909 05:36:23.718358 2430 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:36:23.718604 kubelet[2430]: I0909 05:36:23.718380 2430 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 05:36:23.718604 kubelet[2430]: I0909 05:36:23.718566 2430 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:36:23.726328 kubelet[2430]: I0909 05:36:23.726250 2430 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 05:36:23.726328 kubelet[2430]: I0909 05:36:23.726294 2430 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:36:23.726560 kubelet[2430]: I0909 05:36:23.726354 2430 kubelet.go:314] "Adding apiserver pod source"
Sep 9 05:36:23.726560 kubelet[2430]: I0909 05:36:23.726380 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:36:23.732083 kubelet[2430]: W0909 05:36:23.732015 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Sep 9 05:36:23.732571 kubelet[2430]: E0909 05:36:23.732508 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:36:23.732903 kubelet[2430]: I0909 05:36:23.732880 2430 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:36:23.733714 kubelet[2430]: I0909 05:36:23.733690 2430 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:36:23.733892 kubelet[2430]: W0909 05:36:23.733876 2430 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 05:36:23.739572 kubelet[2430]: W0909 05:36:23.738954 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Sep 9 05:36:23.739572 kubelet[2430]: E0909 05:36:23.739055 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:36:23.739572 kubelet[2430]: I0909 05:36:23.739209 2430 server.go:1274] "Started kubelet"
Sep 9 05:36:23.741497 kubelet[2430]: I0909 05:36:23.741457 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:36:23.751656 kubelet[2430]: E0909 05:36:23.748517 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb.186386849d667507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,UID:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,},FirstTimestamp:2025-09-09 05:36:23.739168007 +0000 UTC m=+0.745984735,LastTimestamp:2025-09-09 05:36:23.739168007 +0000 UTC m=+0.745984735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,}"
Sep 9 05:36:23.752287 kubelet[2430]: I0909 05:36:23.752223 2430 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:36:23.754455 kubelet[2430]: I0909 05:36:23.753756 2430 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 05:36:23.754455 kubelet[2430]: I0909 05:36:23.753926 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:36:23.754455 kubelet[2430]: I0909 05:36:23.754309 2430 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:36:23.755526 kubelet[2430]: I0909 05:36:23.755492 2430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:36:23.758457 kubelet[2430]: I0909 05:36:23.757649 2430 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 05:36:23.758457 kubelet[2430]: E0909 05:36:23.757985 2430 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" not found"
Sep 9 05:36:23.762085 kubelet[2430]: E0909 05:36:23.762023 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="200ms"
Sep 9 05:36:23.762547 kubelet[2430]: I0909 05:36:23.762507 2430 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:36:23.762948 kubelet[2430]: I0909 05:36:23.762912 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:36:23.763644 kubelet[2430]: I0909 05:36:23.763617 2430 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:36:23.763732 kubelet[2430]: I0909 05:36:23.763680 2430 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 05:36:23.764262 kubelet[2430]: W0909 05:36:23.764195 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Sep 9 05:36:23.764368 kubelet[2430]: E0909 05:36:23.764279 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:36:23.765381 kubelet[2430]: E0909 05:36:23.765344 2430 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:36:23.766329 kubelet[2430]: I0909 05:36:23.766298 2430 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:36:23.779621 kubelet[2430]: I0909 05:36:23.779556 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:36:23.782165 kubelet[2430]: I0909 05:36:23.782132 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:36:23.783901 kubelet[2430]: I0909 05:36:23.783471 2430 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 05:36:23.783901 kubelet[2430]: I0909 05:36:23.783525 2430 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 05:36:23.783901 kubelet[2430]: E0909 05:36:23.783603 2430 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:36:23.796660 kubelet[2430]: W0909 05:36:23.796574 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Sep 9 05:36:23.796660 kubelet[2430]: E0909 05:36:23.796653 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:36:23.812077 kubelet[2430]: I0909 05:36:23.812043 2430 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 05:36:23.812707 kubelet[2430]: I0909 05:36:23.812343 2430 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 05:36:23.812707 kubelet[2430]: I0909 05:36:23.812384 2430 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:36:23.814828 kubelet[2430]: I0909 05:36:23.814804 2430 policy_none.go:49] "None policy: Start"
Sep 9 05:36:23.816546 kubelet[2430]: I0909 05:36:23.816054 2430 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 05:36:23.816546 kubelet[2430]: I0909 05:36:23.816109 2430 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:36:23.825386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 05:36:23.840451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 05:36:23.852749 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 05:36:23.855686 kubelet[2430]: I0909 05:36:23.855656 2430 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:36:23.856452 kubelet[2430]: I0909 05:36:23.856331 2430 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:36:23.856452 kubelet[2430]: I0909 05:36:23.856355 2430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:36:23.857050 kubelet[2430]: I0909 05:36:23.857029 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:36:23.860323 kubelet[2430]: E0909 05:36:23.860290 2430 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" not found"
Sep 9 05:36:23.904526 systemd[1]: Created slice kubepods-burstable-podc88e97bacb250c009b5ab67ff27748d6.slice - libcontainer container kubepods-burstable-podc88e97bacb250c009b5ab67ff27748d6.slice.
Sep 9 05:36:23.921891 systemd[1]: Created slice kubepods-burstable-pod0a43fa0012f81757fc79373c6d1b0084.slice - libcontainer container kubepods-burstable-pod0a43fa0012f81757fc79373c6d1b0084.slice.
Sep 9 05:36:23.931394 systemd[1]: Created slice kubepods-burstable-pod35d9ef1b04cc91955051101b9586634c.slice - libcontainer container kubepods-burstable-pod35d9ef1b04cc91955051101b9586634c.slice.
Sep 9 05:36:23.962750 kubelet[2430]: I0909 05:36:23.962508 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:23.962750 kubelet[2430]: E0909 05:36:23.962693 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="400ms" Sep 9 05:36:23.963146 kubelet[2430]: E0909 05:36:23.962997 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.013190 kubelet[2430]: E0909 05:36:24.013009 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb.186386849d667507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,UID:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,},FirstTimestamp:2025-09-09 05:36:23.739168007 +0000 UTC m=+0.745984735,LastTimestamp:2025-09-09 05:36:23.739168007 +0000 UTC m=+0.745984735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,}" Sep 9 05:36:24.065171 kubelet[2430]: I0909 
05:36:24.065063 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065171 kubelet[2430]: I0909 05:36:24.065161 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065833 kubelet[2430]: I0909 05:36:24.065206 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065833 kubelet[2430]: I0909 05:36:24.065240 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065833 kubelet[2430]: I0909 05:36:24.065277 
2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065833 kubelet[2430]: I0909 05:36:24.065314 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-flexvolume-dir\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065974 kubelet[2430]: I0909 05:36:24.065349 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35d9ef1b04cc91955051101b9586634c-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"35d9ef1b04cc91955051101b9586634c\") " pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065974 kubelet[2430]: I0909 05:36:24.065389 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.065974 kubelet[2430]: I0909 05:36:24.065424 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.168325 kubelet[2430]: I0909 05:36:24.168264 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.168894 kubelet[2430]: E0909 05:36:24.168853 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.216886 containerd[1614]: time="2025-09-09T05:36:24.216611529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:c88e97bacb250c009b5ab67ff27748d6,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:24.227730 containerd[1614]: time="2025-09-09T05:36:24.227669197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:0a43fa0012f81757fc79373c6d1b0084,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:24.237281 containerd[1614]: time="2025-09-09T05:36:24.236980699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:35d9ef1b04cc91955051101b9586634c,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:24.269474 containerd[1614]: time="2025-09-09T05:36:24.269287989Z" level=info msg="connecting to shim 73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63" address="unix:///run/containerd/s/c4e3e6103c1a9003e4f41474636b469ed618dbfe30b1839442dffcf9fd737cf4" namespace=k8s.io protocol=ttrpc 
version=3 Sep 9 05:36:24.324682 containerd[1614]: time="2025-09-09T05:36:24.324600159Z" level=info msg="connecting to shim 47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e" address="unix:///run/containerd/s/b42b1bfaf42836e0152fdf83f731fced5f703db3c05cf3b5f6e37ef4feb3a227" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:24.349974 systemd[1]: Started cri-containerd-73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63.scope - libcontainer container 73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63. Sep 9 05:36:24.365571 kubelet[2430]: E0909 05:36:24.365455 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="800ms" Sep 9 05:36:24.373023 containerd[1614]: time="2025-09-09T05:36:24.372880266Z" level=info msg="connecting to shim 62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410" address="unix:///run/containerd/s/56a3eec5025b804b3543a68efeaceeaaf7b6eee4988697053e0b5451e537a874" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:24.391733 systemd[1]: Started cri-containerd-47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e.scope - libcontainer container 47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e. Sep 9 05:36:24.436720 systemd[1]: Started cri-containerd-62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410.scope - libcontainer container 62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410. 
Sep 9 05:36:24.512760 containerd[1614]: time="2025-09-09T05:36:24.510967304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:c88e97bacb250c009b5ab67ff27748d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63\"" Sep 9 05:36:24.517910 kubelet[2430]: E0909 05:36:24.517853 2430 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff5" Sep 9 05:36:24.523244 containerd[1614]: time="2025-09-09T05:36:24.523070595Z" level=info msg="CreateContainer within sandbox \"73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:36:24.546535 containerd[1614]: time="2025-09-09T05:36:24.546478703Z" level=info msg="Container 21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:24.558789 containerd[1614]: time="2025-09-09T05:36:24.558708335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:0a43fa0012f81757fc79373c6d1b0084,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e\"" Sep 9 05:36:24.560270 containerd[1614]: time="2025-09-09T05:36:24.560113943Z" level=info msg="CreateContainer within sandbox \"73e01149627a6a91c16fffcc621ed4284228a26dab7be1386fb2898f87b1fd63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616\"" Sep 9 05:36:24.561177 containerd[1614]: time="2025-09-09T05:36:24.561148601Z" level=info msg="StartContainer for 
\"21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616\"" Sep 9 05:36:24.563057 containerd[1614]: time="2025-09-09T05:36:24.562980596Z" level=info msg="connecting to shim 21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616" address="unix:///run/containerd/s/c4e3e6103c1a9003e4f41474636b469ed618dbfe30b1839442dffcf9fd737cf4" protocol=ttrpc version=3 Sep 9 05:36:24.564639 kubelet[2430]: E0909 05:36:24.564537 2430 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4" Sep 9 05:36:24.569479 containerd[1614]: time="2025-09-09T05:36:24.568788462Z" level=info msg="CreateContainer within sandbox \"47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:36:24.590373 kubelet[2430]: I0909 05:36:24.590327 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.592528 kubelet[2430]: E0909 05:36:24.592415 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:24.598305 containerd[1614]: time="2025-09-09T05:36:24.598254102Z" level=info msg="Container ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:24.612775 systemd[1]: Started cri-containerd-21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616.scope - libcontainer container 21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616. 
Sep 9 05:36:24.615706 containerd[1614]: time="2025-09-09T05:36:24.615650976Z" level=info msg="CreateContainer within sandbox \"47ac94763cad30cab3afb09e7e2d912418dc2f76409cf7158b115cef02921a4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0\"" Sep 9 05:36:24.618898 containerd[1614]: time="2025-09-09T05:36:24.617288028Z" level=info msg="StartContainer for \"ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0\"" Sep 9 05:36:24.625296 containerd[1614]: time="2025-09-09T05:36:24.625128232Z" level=info msg="connecting to shim ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0" address="unix:///run/containerd/s/b42b1bfaf42836e0152fdf83f731fced5f703db3c05cf3b5f6e37ef4feb3a227" protocol=ttrpc version=3 Sep 9 05:36:24.641460 containerd[1614]: time="2025-09-09T05:36:24.639869877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb,Uid:35d9ef1b04cc91955051101b9586634c,Namespace:kube-system,Attempt:0,} returns sandbox id \"62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410\"" Sep 9 05:36:24.647467 kubelet[2430]: E0909 05:36:24.645951 2430 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff5" Sep 9 05:36:24.651466 containerd[1614]: time="2025-09-09T05:36:24.651030247Z" level=info msg="CreateContainer within sandbox \"62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:36:24.673690 systemd[1]: Started cri-containerd-ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0.scope - libcontainer container ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0. 
Sep 9 05:36:24.674478 containerd[1614]: time="2025-09-09T05:36:24.673403124Z" level=info msg="Container b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:24.688160 containerd[1614]: time="2025-09-09T05:36:24.688068886Z" level=info msg="CreateContainer within sandbox \"62fa6f22a8ad227c75bde26ae9c830b0b3b968e973d3a96dc7c6b08cf3531410\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179\"" Sep 9 05:36:24.690948 containerd[1614]: time="2025-09-09T05:36:24.690868486Z" level=info msg="StartContainer for \"b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179\"" Sep 9 05:36:24.694476 containerd[1614]: time="2025-09-09T05:36:24.692422896Z" level=info msg="connecting to shim b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179" address="unix:///run/containerd/s/56a3eec5025b804b3543a68efeaceeaaf7b6eee4988697053e0b5451e537a874" protocol=ttrpc version=3 Sep 9 05:36:24.728643 systemd[1]: Started cri-containerd-b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179.scope - libcontainer container b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179. 
Sep 9 05:36:24.781774 containerd[1614]: time="2025-09-09T05:36:24.781516363Z" level=info msg="StartContainer for \"21b66c335228968363e527593398bc781ac59e4f1c588343a2eece8b66116616\" returns successfully" Sep 9 05:36:24.830028 kubelet[2430]: W0909 05:36:24.829898 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Sep 9 05:36:24.830028 kubelet[2430]: E0909 05:36:24.830020 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:36:24.854090 containerd[1614]: time="2025-09-09T05:36:24.854019667Z" level=info msg="StartContainer for \"ad0847724609624f0483381687bda2d9be2473094dec68212fb3fd863df8d0b0\" returns successfully" Sep 9 05:36:24.916872 containerd[1614]: time="2025-09-09T05:36:24.916806017Z" level=info msg="StartContainer for \"b28ab01cafcce7c47cb4b5a18ccc4faa4af7aaee5b47ddd3a8b9c04a64364179\" returns successfully" Sep 9 05:36:24.921041 kubelet[2430]: W0909 05:36:24.920830 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Sep 9 05:36:24.921217 kubelet[2430]: E0909 05:36:24.921082 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection 
refused" logger="UnhandledError" Sep 9 05:36:25.400459 kubelet[2430]: I0909 05:36:25.399930 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:28.000797 kubelet[2430]: E0909 05:36:28.000742 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" not found" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:28.149730 kubelet[2430]: I0909 05:36:28.149656 2430 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:28.741733 kubelet[2430]: I0909 05:36:28.741663 2430 apiserver.go:52] "Watching apiserver" Sep 9 05:36:28.764841 kubelet[2430]: I0909 05:36:28.764737 2430 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 05:36:29.288632 kubelet[2430]: W0909 05:36:29.288582 2430 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:36:29.957721 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-7.scope)... Sep 9 05:36:29.957760 systemd[1]: Reloading... Sep 9 05:36:30.100746 zram_generator::config[2741]: No configuration found. Sep 9 05:36:30.484749 systemd[1]: Reloading finished in 526 ms. Sep 9 05:36:30.527878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:36:30.547168 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:36:30.547901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:36:30.548008 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 130M memory peak. Sep 9 05:36:30.551836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 05:36:30.868119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:36:30.882166 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:36:30.970385 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:30.970385 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 05:36:30.970385 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:30.971005 kubelet[2793]: I0909 05:36:30.970543 2793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:36:30.987138 kubelet[2793]: I0909 05:36:30.987083 2793 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 05:36:30.987138 kubelet[2793]: I0909 05:36:30.987131 2793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:36:30.987675 kubelet[2793]: I0909 05:36:30.987640 2793 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 05:36:30.991340 kubelet[2793]: I0909 05:36:30.991106 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 05:36:30.999476 kubelet[2793]: I0909 05:36:30.999099 2793 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:36:31.010454 kubelet[2793]: I0909 05:36:31.010397 2793 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:36:31.015478 kubelet[2793]: I0909 05:36:31.015417 2793 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:36:31.015854 kubelet[2793]: I0909 05:36:31.015828 2793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 05:36:31.016539 kubelet[2793]: I0909 05:36:31.016115 2793 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:36:31.016539 kubelet[2793]: I0909 05:36:31.016177 2793 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod
":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:36:31.016838 kubelet[2793]: I0909 05:36:31.016556 2793 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:36:31.016838 kubelet[2793]: I0909 05:36:31.016574 2793 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 05:36:31.016838 kubelet[2793]: I0909 05:36:31.016619 2793 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:36:31.016838 kubelet[2793]: I0909 05:36:31.016787 2793 kubelet.go:408] "Attempting to sync node with API server" Sep 9 05:36:31.016838 kubelet[2793]: I0909 05:36:31.016808 2793 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:36:31.017106 kubelet[2793]: I0909 05:36:31.016855 2793 kubelet.go:314] "Adding apiserver pod source" Sep 9 05:36:31.017106 kubelet[2793]: I0909 05:36:31.016882 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:36:31.021841 kubelet[2793]: I0909 05:36:31.021590 2793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:36:31.022265 kubelet[2793]: I0909 05:36:31.022240 2793 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:36:31.024449 kubelet[2793]: I0909 05:36:31.023948 2793 server.go:1274] "Started kubelet" Sep 9 05:36:31.029524 kubelet[2793]: I0909 05:36:31.029335 2793 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 9 05:36:31.032349 kubelet[2793]: I0909 05:36:31.031637 2793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:36:31.039263 kubelet[2793]: I0909 05:36:31.039228 2793 server.go:449] "Adding debug handlers to kubelet server" Sep 9 05:36:31.043844 kubelet[2793]: I0909 05:36:31.043600 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:36:31.044085 kubelet[2793]: I0909 05:36:31.044047 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:36:31.050911 kubelet[2793]: I0909 05:36:31.050457 2793 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:36:31.050911 kubelet[2793]: E0909 05:36:31.050634 2793 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" not found" Sep 9 05:36:31.051447 kubelet[2793]: I0909 05:36:31.051410 2793 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:36:31.071547 kubelet[2793]: I0909 05:36:31.053175 2793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:36:31.071894 kubelet[2793]: I0909 05:36:31.053373 2793 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:36:31.102322 kubelet[2793]: I0909 05:36:31.102010 2793 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:36:31.102322 kubelet[2793]: I0909 05:36:31.102196 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:36:31.124472 kubelet[2793]: I0909 05:36:31.122247 2793 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:36:31.134554 kubelet[2793]: 
I0909 05:36:31.133576 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:36:31.138574 kubelet[2793]: E0909 05:36:31.138026 2793 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:36:31.152037 kubelet[2793]: I0909 05:36:31.150415 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:36:31.152037 kubelet[2793]: I0909 05:36:31.151633 2793 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:36:31.152037 kubelet[2793]: I0909 05:36:31.151666 2793 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:36:31.152037 kubelet[2793]: E0909 05:36:31.151753 2793 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:36:31.246527 kubelet[2793]: I0909 05:36:31.246490 2793 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:36:31.248098 kubelet[2793]: I0909 05:36:31.248064 2793 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:36:31.248310 kubelet[2793]: I0909 05:36:31.248284 2793 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:36:31.249520 kubelet[2793]: I0909 05:36:31.248567 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:36:31.249520 kubelet[2793]: I0909 05:36:31.248590 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:36:31.249520 kubelet[2793]: I0909 05:36:31.248622 2793 policy_none.go:49] "None policy: Start" Sep 9 05:36:31.251874 kubelet[2793]: I0909 05:36:31.251812 2793 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 05:36:31.251874 kubelet[2793]: I0909 05:36:31.251846 2793 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:36:31.252355 kubelet[2793]: I0909 05:36:31.252235 2793 state_mem.go:75] "Updated machine memory state" 
Sep 9 05:36:31.253813 kubelet[2793]: E0909 05:36:31.253480 2793 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:36:31.261456 kubelet[2793]: I0909 05:36:31.261400 2793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:36:31.261698 kubelet[2793]: I0909 05:36:31.261678 2793 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:36:31.261786 kubelet[2793]: I0909 05:36:31.261701 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:36:31.265039 kubelet[2793]: I0909 05:36:31.263516 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:36:31.389417 kubelet[2793]: I0909 05:36:31.388973 2793 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.403523 kubelet[2793]: I0909 05:36:31.403473 2793 kubelet_node_status.go:111] "Node was previously registered" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.403721 kubelet[2793]: I0909 05:36:31.403596 2793 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.465457 kubelet[2793]: W0909 05:36:31.465393 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:36:31.467713 kubelet[2793]: W0909 05:36:31.467639 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:36:31.469902 kubelet[2793]: W0909 05:36:31.469835 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must be no more than 63 characters] Sep 9 05:36:31.470044 kubelet[2793]: E0909 05:36:31.469959 2793 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" already exists" pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.478923 kubelet[2793]: I0909 05:36:31.477106 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-flexvolume-dir\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.478923 kubelet[2793]: I0909 05:36:31.477181 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.478923 kubelet[2793]: I0909 05:36:31.477225 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35d9ef1b04cc91955051101b9586634c-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"35d9ef1b04cc91955051101b9586634c\") " pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.478923 kubelet[2793]: I0909 05:36:31.477261 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.479286 kubelet[2793]: I0909 05:36:31.477294 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.479286 kubelet[2793]: I0909 05:36:31.477326 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.479286 kubelet[2793]: I0909 05:36:31.477355 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a43fa0012f81757fc79373c6d1b0084-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"0a43fa0012f81757fc79373c6d1b0084\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.479565 kubelet[2793]: I0909 05:36:31.479503 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:31.479650 kubelet[2793]: I0909 05:36:31.479594 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c88e97bacb250c009b5ab67ff27748d6-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" (UID: \"c88e97bacb250c009b5ab67ff27748d6\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:32.019014 kubelet[2793]: I0909 05:36:32.018950 2793 apiserver.go:52] "Watching apiserver" Sep 9 05:36:32.062905 kubelet[2793]: I0909 05:36:32.062744 2793 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 05:36:32.203271 kubelet[2793]: W0909 05:36:32.203197 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:36:32.203747 kubelet[2793]: E0909 05:36:32.203550 2793 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" already exists" pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:36:32.236027 kubelet[2793]: I0909 05:36:32.235358 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" podStartSLOduration=1.235329503 podStartE2EDuration="1.235329503s" podCreationTimestamp="2025-09-09 05:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:32.225899574 +0000 UTC m=+1.335089292" watchObservedRunningTime="2025-09-09 05:36:32.235329503 +0000 UTC m=+1.344519217" Sep 9 05:36:32.247077 kubelet[2793]: I0909 05:36:32.247004 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" podStartSLOduration=3.246982003 podStartE2EDuration="3.246982003s" podCreationTimestamp="2025-09-09 05:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:32.246546752 +0000 UTC m=+1.355736471" watchObservedRunningTime="2025-09-09 05:36:32.246982003 +0000 UTC m=+1.356171724" Sep 9 05:36:32.247355 kubelet[2793]: I0909 05:36:32.247135 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" podStartSLOduration=1.247124881 podStartE2EDuration="1.247124881s" podCreationTimestamp="2025-09-09 05:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:32.236187891 +0000 UTC m=+1.345377582" watchObservedRunningTime="2025-09-09 05:36:32.247124881 +0000 UTC m=+1.356314596" Sep 9 05:36:36.915728 update_engine[1544]: I20250909 05:36:36.915624 1544 update_attempter.cc:509] Updating boot flags... Sep 9 05:36:36.977202 kubelet[2793]: I0909 05:36:36.976707 2793 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:36:36.979141 containerd[1614]: time="2025-09-09T05:36:36.977856017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 05:36:36.979704 kubelet[2793]: I0909 05:36:36.978971 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:36:38.039463 systemd[1]: Created slice kubepods-besteffort-podd2d3bdc7_6692_4acf_9e0d_ddd01cc56075.slice - libcontainer container kubepods-besteffort-podd2d3bdc7_6692_4acf_9e0d_ddd01cc56075.slice. Sep 9 05:36:38.129123 kubelet[2793]: I0909 05:36:38.128027 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2d3bdc7-6692-4acf-9e0d-ddd01cc56075-lib-modules\") pod \"kube-proxy-2q9dp\" (UID: \"d2d3bdc7-6692-4acf-9e0d-ddd01cc56075\") " pod="kube-system/kube-proxy-2q9dp" Sep 9 05:36:38.129123 kubelet[2793]: I0909 05:36:38.128078 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldf2\" (UniqueName: \"kubernetes.io/projected/d2d3bdc7-6692-4acf-9e0d-ddd01cc56075-kube-api-access-vldf2\") pod \"kube-proxy-2q9dp\" (UID: \"d2d3bdc7-6692-4acf-9e0d-ddd01cc56075\") " pod="kube-system/kube-proxy-2q9dp" Sep 9 05:36:38.129123 kubelet[2793]: I0909 05:36:38.128116 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2d3bdc7-6692-4acf-9e0d-ddd01cc56075-kube-proxy\") pod \"kube-proxy-2q9dp\" (UID: \"d2d3bdc7-6692-4acf-9e0d-ddd01cc56075\") " pod="kube-system/kube-proxy-2q9dp" Sep 9 05:36:38.129123 kubelet[2793]: I0909 05:36:38.128141 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2d3bdc7-6692-4acf-9e0d-ddd01cc56075-xtables-lock\") pod \"kube-proxy-2q9dp\" (UID: \"d2d3bdc7-6692-4acf-9e0d-ddd01cc56075\") " pod="kube-system/kube-proxy-2q9dp" Sep 9 05:36:38.129518 systemd[1]: Created slice kubepods-besteffort-podd56495d6_b3ac_403f_9cb0_bc5f50a5ee1b.slice - 
libcontainer container kubepods-besteffort-podd56495d6_b3ac_403f_9cb0_bc5f50a5ee1b.slice. Sep 9 05:36:38.228906 kubelet[2793]: I0909 05:36:38.228836 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-d4gjg\" (UID: \"d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b\") " pod="tigera-operator/tigera-operator-58fc44c59b-d4gjg" Sep 9 05:36:38.229079 kubelet[2793]: I0909 05:36:38.228980 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plgj5\" (UniqueName: \"kubernetes.io/projected/d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b-kube-api-access-plgj5\") pod \"tigera-operator-58fc44c59b-d4gjg\" (UID: \"d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b\") " pod="tigera-operator/tigera-operator-58fc44c59b-d4gjg" Sep 9 05:36:38.352814 containerd[1614]: time="2025-09-09T05:36:38.352306390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2q9dp,Uid:d2d3bdc7-6692-4acf-9e0d-ddd01cc56075,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:38.391262 containerd[1614]: time="2025-09-09T05:36:38.390607996Z" level=info msg="connecting to shim c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d" address="unix:///run/containerd/s/3f888e769df6fbdb67c8a290df82961d85fed16f2e13095b7e5ad8ab92048d41" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:38.436467 containerd[1614]: time="2025-09-09T05:36:38.436324885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-d4gjg,Uid:d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b,Namespace:tigera-operator,Attempt:0,}" Sep 9 05:36:38.442192 systemd[1]: Started cri-containerd-c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d.scope - libcontainer container c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d. 
Sep 9 05:36:38.475192 containerd[1614]: time="2025-09-09T05:36:38.474954671Z" level=info msg="connecting to shim a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb" address="unix:///run/containerd/s/7b99542054ad77a4516ec21f028dccabd47af72db7a9c823491cf573c03ceba3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:38.525187 containerd[1614]: time="2025-09-09T05:36:38.525026306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2q9dp,Uid:d2d3bdc7-6692-4acf-9e0d-ddd01cc56075,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d\"" Sep 9 05:36:38.534958 containerd[1614]: time="2025-09-09T05:36:38.534756992Z" level=info msg="CreateContainer within sandbox \"c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:36:38.537178 systemd[1]: Started cri-containerd-a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb.scope - libcontainer container a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb. 
Sep 9 05:36:38.559954 containerd[1614]: time="2025-09-09T05:36:38.559905207Z" level=info msg="Container c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:38.573290 containerd[1614]: time="2025-09-09T05:36:38.573201374Z" level=info msg="CreateContainer within sandbox \"c5ed503723c88cc4e9013f4dbf229efc53888e0562e8cf118647f09d51f0bc8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e\"" Sep 9 05:36:38.576191 containerd[1614]: time="2025-09-09T05:36:38.575615438Z" level=info msg="StartContainer for \"c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e\"" Sep 9 05:36:38.581143 containerd[1614]: time="2025-09-09T05:36:38.580519996Z" level=info msg="connecting to shim c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e" address="unix:///run/containerd/s/3f888e769df6fbdb67c8a290df82961d85fed16f2e13095b7e5ad8ab92048d41" protocol=ttrpc version=3 Sep 9 05:36:38.618786 systemd[1]: Started cri-containerd-c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e.scope - libcontainer container c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e. 
Sep 9 05:36:38.652978 containerd[1614]: time="2025-09-09T05:36:38.652908190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-d4gjg,Uid:d56495d6-b3ac-403f-9cb0-bc5f50a5ee1b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb\"" Sep 9 05:36:38.659199 containerd[1614]: time="2025-09-09T05:36:38.658829726Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 05:36:38.725619 containerd[1614]: time="2025-09-09T05:36:38.725290760Z" level=info msg="StartContainer for \"c5bae7cc8507c9db4b874e7e4c071426e40b63cfcedc73e5fa4c76542569633e\" returns successfully" Sep 9 05:36:39.262623 kubelet[2793]: I0909 05:36:39.262236 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2q9dp" podStartSLOduration=1.262207622 podStartE2EDuration="1.262207622s" podCreationTimestamp="2025-09-09 05:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:39.239015969 +0000 UTC m=+8.348205686" watchObservedRunningTime="2025-09-09 05:36:39.262207622 +0000 UTC m=+8.371397595" Sep 9 05:36:39.798011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046057510.mount: Deactivated successfully. 
Sep 9 05:36:40.791845 containerd[1614]: time="2025-09-09T05:36:40.791768852Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:40.793513 containerd[1614]: time="2025-09-09T05:36:40.793317625Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 05:36:40.796921 containerd[1614]: time="2025-09-09T05:36:40.796845258Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:40.799268 containerd[1614]: time="2025-09-09T05:36:40.799207846Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:40.800469 containerd[1614]: time="2025-09-09T05:36:40.800265991Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.1413864s" Sep 9 05:36:40.800469 containerd[1614]: time="2025-09-09T05:36:40.800315358Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 05:36:40.805168 containerd[1614]: time="2025-09-09T05:36:40.805110948Z" level=info msg="CreateContainer within sandbox \"a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 05:36:40.817463 containerd[1614]: time="2025-09-09T05:36:40.816138246Z" level=info msg="Container 
a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:40.830139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148717327.mount: Deactivated successfully. Sep 9 05:36:40.837468 containerd[1614]: time="2025-09-09T05:36:40.837339771Z" level=info msg="CreateContainer within sandbox \"a645284803930904e43497dd6a4e2e15226d0b5bc70fff116c0176ad6f2242fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12\"" Sep 9 05:36:40.839043 containerd[1614]: time="2025-09-09T05:36:40.838966335Z" level=info msg="StartContainer for \"a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12\"" Sep 9 05:36:40.840633 containerd[1614]: time="2025-09-09T05:36:40.840588567Z" level=info msg="connecting to shim a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12" address="unix:///run/containerd/s/7b99542054ad77a4516ec21f028dccabd47af72db7a9c823491cf573c03ceba3" protocol=ttrpc version=3 Sep 9 05:36:40.882721 systemd[1]: Started cri-containerd-a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12.scope - libcontainer container a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12. Sep 9 05:36:40.934126 containerd[1614]: time="2025-09-09T05:36:40.934060411Z" level=info msg="StartContainer for \"a8f2ded6da8aef755e3962625ca204f321aab9cd519e8d07c87f2fa0ba4cdd12\" returns successfully" Sep 9 05:36:46.715529 sudo[1867]: pam_unix(sudo:session): session closed for user root Sep 9 05:36:46.761800 sshd[1866]: Connection closed by 139.178.89.65 port 50492 Sep 9 05:36:46.763619 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:46.778912 systemd[1]: sshd@6-10.128.0.4:22-139.178.89.65:50492.service: Deactivated successfully. Sep 9 05:36:46.787504 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 05:36:46.788585 systemd[1]: session-7.scope: Consumed 6.704s CPU time, 223.3M memory peak. Sep 9 05:36:46.798873 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:36:46.803510 systemd-logind[1542]: Removed session 7. Sep 9 05:36:53.165417 kubelet[2793]: I0909 05:36:53.165330 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-d4gjg" podStartSLOduration=13.019137287 podStartE2EDuration="15.165273441s" podCreationTimestamp="2025-09-09 05:36:38 +0000 UTC" firstStartedPulling="2025-09-09 05:36:38.655652739 +0000 UTC m=+7.764842443" lastFinishedPulling="2025-09-09 05:36:40.801788886 +0000 UTC m=+9.910978597" observedRunningTime="2025-09-09 05:36:41.254636964 +0000 UTC m=+10.363826694" watchObservedRunningTime="2025-09-09 05:36:53.165273441 +0000 UTC m=+22.274463157" Sep 9 05:36:53.181188 systemd[1]: Created slice kubepods-besteffort-pod1b77c2d3_66dc_44cb_b202_4d5be01661e9.slice - libcontainer container kubepods-besteffort-pod1b77c2d3_66dc_44cb_b202_4d5be01661e9.slice. 
Sep 9 05:36:53.240595 kubelet[2793]: I0909 05:36:53.240540 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1b77c2d3-66dc-44cb-b202-4d5be01661e9-typha-certs\") pod \"calico-typha-85f9c86b6-6ngn2\" (UID: \"1b77c2d3-66dc-44cb-b202-4d5be01661e9\") " pod="calico-system/calico-typha-85f9c86b6-6ngn2" Sep 9 05:36:53.240797 kubelet[2793]: I0909 05:36:53.240608 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jhf\" (UniqueName: \"kubernetes.io/projected/1b77c2d3-66dc-44cb-b202-4d5be01661e9-kube-api-access-96jhf\") pod \"calico-typha-85f9c86b6-6ngn2\" (UID: \"1b77c2d3-66dc-44cb-b202-4d5be01661e9\") " pod="calico-system/calico-typha-85f9c86b6-6ngn2" Sep 9 05:36:53.240797 kubelet[2793]: I0909 05:36:53.240646 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b77c2d3-66dc-44cb-b202-4d5be01661e9-tigera-ca-bundle\") pod \"calico-typha-85f9c86b6-6ngn2\" (UID: \"1b77c2d3-66dc-44cb-b202-4d5be01661e9\") " pod="calico-system/calico-typha-85f9c86b6-6ngn2" Sep 9 05:36:53.469251 systemd[1]: Created slice kubepods-besteffort-podc70a4e7b_a5ae_44d9_9d25_86d2efee70af.slice - libcontainer container kubepods-besteffort-podc70a4e7b_a5ae_44d9_9d25_86d2efee70af.slice. 
Sep 9 05:36:53.492678 containerd[1614]: time="2025-09-09T05:36:53.492607071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f9c86b6-6ngn2,Uid:1b77c2d3-66dc-44cb-b202-4d5be01661e9,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:53.536193 containerd[1614]: time="2025-09-09T05:36:53.535998847Z" level=info msg="connecting to shim a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33" address="unix:///run/containerd/s/499d0ffb8266a1924d951c60026e23b08c51f4b44b3937fd82eddabc7d2058c2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:53.544602 kubelet[2793]: I0909 05:36:53.544531 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-tigera-ca-bundle\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.544822 kubelet[2793]: I0909 05:36:53.544637 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-cni-bin-dir\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.544822 kubelet[2793]: I0909 05:36:53.544702 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-xtables-lock\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.544989 kubelet[2793]: I0909 05:36:53.544946 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-node-certs\") pod 
\"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.545237 kubelet[2793]: I0909 05:36:53.545022 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-var-run-calico\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550453 kubelet[2793]: I0909 05:36:53.545276 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-policysync\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550453 kubelet[2793]: I0909 05:36:53.550243 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-var-lib-calico\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550453 kubelet[2793]: I0909 05:36:53.550349 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-cni-log-dir\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550750 kubelet[2793]: I0909 05:36:53.550418 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-lib-modules\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " 
pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550750 kubelet[2793]: I0909 05:36:53.550525 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz92\" (UniqueName: \"kubernetes.io/projected/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-kube-api-access-9pz92\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550750 kubelet[2793]: I0909 05:36:53.550594 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-cni-net-dir\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.550920 kubelet[2793]: I0909 05:36:53.550786 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c70a4e7b-a5ae-44d9-9d25-86d2efee70af-flexvol-driver-host\") pod \"calico-node-lcbjr\" (UID: \"c70a4e7b-a5ae-44d9-9d25-86d2efee70af\") " pod="calico-system/calico-node-lcbjr" Sep 9 05:36:53.616016 systemd[1]: Started cri-containerd-a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33.scope - libcontainer container a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33. 
Sep 9 05:36:53.664361 kubelet[2793]: E0909 05:36:53.664275 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.664361 kubelet[2793]: W0909 05:36:53.664317 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.664623 kubelet[2793]: E0909 05:36:53.664380 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.675008 kubelet[2793]: E0909 05:36:53.674974 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.675008 kubelet[2793]: W0909 05:36:53.675006 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.675232 kubelet[2793]: E0909 05:36:53.675035 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.687055 kubelet[2793]: E0909 05:36:53.686933 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.687055 kubelet[2793]: W0909 05:36:53.686966 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.687055 kubelet[2793]: E0909 05:36:53.686998 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.755690 containerd[1614]: time="2025-09-09T05:36:53.755413903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f9c86b6-6ngn2,Uid:1b77c2d3-66dc-44cb-b202-4d5be01661e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33\"" Sep 9 05:36:53.762216 containerd[1614]: time="2025-09-09T05:36:53.762156797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 05:36:53.781486 containerd[1614]: time="2025-09-09T05:36:53.781393557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lcbjr,Uid:c70a4e7b-a5ae-44d9-9d25-86d2efee70af,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:53.791631 kubelet[2793]: E0909 05:36:53.791326 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:36:53.815093 kubelet[2793]: E0909 05:36:53.815028 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.815460 kubelet[2793]: W0909 05:36:53.815334 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.815460 kubelet[2793]: E0909 05:36:53.815381 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.816475 kubelet[2793]: E0909 05:36:53.816118 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.816475 kubelet[2793]: W0909 05:36:53.816142 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.816936 kubelet[2793]: E0909 05:36:53.816704 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.817886 kubelet[2793]: E0909 05:36:53.817830 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.817886 kubelet[2793]: W0909 05:36:53.817851 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.818221 kubelet[2793]: E0909 05:36:53.818054 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.819589 kubelet[2793]: E0909 05:36:53.819565 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.819797 kubelet[2793]: W0909 05:36:53.819729 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.819797 kubelet[2793]: E0909 05:36:53.819756 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.821862 kubelet[2793]: E0909 05:36:53.821800 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.821862 kubelet[2793]: W0909 05:36:53.821822 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.822258 kubelet[2793]: E0909 05:36:53.821843 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.823699 kubelet[2793]: E0909 05:36:53.823640 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.823699 kubelet[2793]: W0909 05:36:53.823664 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.824872 kubelet[2793]: E0909 05:36:53.824765 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.825368 kubelet[2793]: E0909 05:36:53.825345 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.825368 kubelet[2793]: W0909 05:36:53.825369 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.825688 kubelet[2793]: E0909 05:36:53.825391 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.825765 kubelet[2793]: E0909 05:36:53.825729 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.825765 kubelet[2793]: W0909 05:36:53.825743 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.825765 kubelet[2793]: E0909 05:36:53.825759 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.827636 kubelet[2793]: E0909 05:36:53.827606 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.827636 kubelet[2793]: W0909 05:36:53.827633 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.827859 kubelet[2793]: E0909 05:36:53.827652 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.829357 kubelet[2793]: E0909 05:36:53.828578 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.829357 kubelet[2793]: W0909 05:36:53.828596 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.829357 kubelet[2793]: E0909 05:36:53.828613 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.829357 kubelet[2793]: E0909 05:36:53.828970 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.829357 kubelet[2793]: W0909 05:36:53.828985 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.829357 kubelet[2793]: E0909 05:36:53.829002 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.830699 kubelet[2793]: E0909 05:36:53.830675 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.830699 kubelet[2793]: W0909 05:36:53.830698 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.830888 kubelet[2793]: E0909 05:36:53.830717 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.831739 kubelet[2793]: E0909 05:36:53.831716 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.831739 kubelet[2793]: W0909 05:36:53.831738 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.832689 kubelet[2793]: E0909 05:36:53.831755 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.834488 kubelet[2793]: E0909 05:36:53.832092 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.834610 kubelet[2793]: W0909 05:36:53.834494 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.834610 kubelet[2793]: E0909 05:36:53.834517 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.837772 kubelet[2793]: E0909 05:36:53.837737 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.837772 kubelet[2793]: W0909 05:36:53.837764 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.837963 kubelet[2793]: E0909 05:36:53.837785 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.838626 kubelet[2793]: E0909 05:36:53.838174 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.838626 kubelet[2793]: W0909 05:36:53.838193 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.838626 kubelet[2793]: E0909 05:36:53.838210 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.838885 containerd[1614]: time="2025-09-09T05:36:53.838615723Z" level=info msg="connecting to shim d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3" address="unix:///run/containerd/s/01364e2651e942118c3e8da8749f0d1262e1f29653ec775bb18a91caa8cc0c22" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:53.839692 kubelet[2793]: E0909 05:36:53.839635 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.839692 kubelet[2793]: W0909 05:36:53.839692 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.839863 kubelet[2793]: E0909 05:36:53.839713 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.840704 kubelet[2793]: E0909 05:36:53.840675 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.840911 kubelet[2793]: W0909 05:36:53.840702 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.840911 kubelet[2793]: E0909 05:36:53.840763 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.842065 kubelet[2793]: E0909 05:36:53.841996 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.842157 kubelet[2793]: W0909 05:36:53.842068 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.842157 kubelet[2793]: E0909 05:36:53.842093 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.843321 kubelet[2793]: E0909 05:36:53.843290 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.843549 kubelet[2793]: W0909 05:36:53.843522 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.843633 kubelet[2793]: E0909 05:36:53.843555 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.855184 kubelet[2793]: E0909 05:36:53.855118 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.855184 kubelet[2793]: W0909 05:36:53.855172 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.855419 kubelet[2793]: E0909 05:36:53.855202 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.856122 kubelet[2793]: I0909 05:36:53.855779 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb14b863-0bcb-461f-94e8-0e174d2118f2-varrun\") pod \"csi-node-driver-sx2vx\" (UID: \"fb14b863-0bcb-461f-94e8-0e174d2118f2\") " pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:36:53.857003 kubelet[2793]: E0909 05:36:53.856947 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.857003 kubelet[2793]: W0909 05:36:53.856976 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.857169 kubelet[2793]: E0909 05:36:53.857103 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.858058 kubelet[2793]: E0909 05:36:53.858008 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.858058 kubelet[2793]: W0909 05:36:53.858032 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.858234 kubelet[2793]: E0909 05:36:53.858187 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.858841 kubelet[2793]: I0909 05:36:53.858788 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb14b863-0bcb-461f-94e8-0e174d2118f2-registration-dir\") pod \"csi-node-driver-sx2vx\" (UID: \"fb14b863-0bcb-461f-94e8-0e174d2118f2\") " pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:36:53.859297 kubelet[2793]: E0909 05:36:53.859252 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.859383 kubelet[2793]: W0909 05:36:53.859277 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.859383 kubelet[2793]: E0909 05:36:53.859326 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.860993 kubelet[2793]: E0909 05:36:53.860962 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.860993 kubelet[2793]: W0909 05:36:53.860993 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.861188 kubelet[2793]: E0909 05:36:53.861019 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.862146 kubelet[2793]: E0909 05:36:53.862119 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.862269 kubelet[2793]: W0909 05:36:53.862145 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.862346 kubelet[2793]: E0909 05:36:53.862273 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.863165 kubelet[2793]: E0909 05:36:53.863106 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.863165 kubelet[2793]: W0909 05:36:53.863131 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.863354 kubelet[2793]: E0909 05:36:53.863195 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.863415 kubelet[2793]: I0909 05:36:53.863389 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb14b863-0bcb-461f-94e8-0e174d2118f2-socket-dir\") pod \"csi-node-driver-sx2vx\" (UID: \"fb14b863-0bcb-461f-94e8-0e174d2118f2\") " pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:36:53.865731 kubelet[2793]: E0909 05:36:53.865072 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.865731 kubelet[2793]: W0909 05:36:53.865097 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.865731 kubelet[2793]: E0909 05:36:53.865187 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.866135 kubelet[2793]: E0909 05:36:53.866107 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.866135 kubelet[2793]: W0909 05:36:53.866135 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.866135 kubelet[2793]: E0909 05:36:53.866276 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.866135 kubelet[2793]: I0909 05:36:53.866605 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hmkg\" (UniqueName: \"kubernetes.io/projected/fb14b863-0bcb-461f-94e8-0e174d2118f2-kube-api-access-6hmkg\") pod \"csi-node-driver-sx2vx\" (UID: \"fb14b863-0bcb-461f-94e8-0e174d2118f2\") " pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:36:53.869036 kubelet[2793]: E0909 05:36:53.869001 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.869121 kubelet[2793]: W0909 05:36:53.869037 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.869121 kubelet[2793]: E0909 05:36:53.869058 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.871052 kubelet[2793]: E0909 05:36:53.871023 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.871052 kubelet[2793]: W0909 05:36:53.871046 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.871226 kubelet[2793]: E0909 05:36:53.871072 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.871887 kubelet[2793]: E0909 05:36:53.871808 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.871887 kubelet[2793]: W0909 05:36:53.871828 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.872513 kubelet[2793]: E0909 05:36:53.872249 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.873146 kubelet[2793]: E0909 05:36:53.873088 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.873146 kubelet[2793]: W0909 05:36:53.873146 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.873325 kubelet[2793]: E0909 05:36:53.873166 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.873390 kubelet[2793]: I0909 05:36:53.873334 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb14b863-0bcb-461f-94e8-0e174d2118f2-kubelet-dir\") pod \"csi-node-driver-sx2vx\" (UID: \"fb14b863-0bcb-461f-94e8-0e174d2118f2\") " pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:36:53.875407 kubelet[2793]: E0909 05:36:53.875373 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.875788 kubelet[2793]: W0909 05:36:53.875577 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.875788 kubelet[2793]: E0909 05:36:53.875607 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.877498 kubelet[2793]: E0909 05:36:53.877355 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.877498 kubelet[2793]: W0909 05:36:53.877402 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.877498 kubelet[2793]: E0909 05:36:53.877423 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.927148 systemd[1]: Started cri-containerd-d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3.scope - libcontainer container d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3. 
Sep 9 05:36:53.974399 kubelet[2793]: E0909 05:36:53.974355 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.974760 kubelet[2793]: W0909 05:36:53.974610 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.974760 kubelet[2793]: E0909 05:36:53.974650 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.975533 kubelet[2793]: E0909 05:36:53.975468 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.975533 kubelet[2793]: W0909 05:36:53.975490 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.975865 kubelet[2793]: E0909 05:36:53.975697 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.976038 kubelet[2793]: E0909 05:36:53.975917 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.976038 kubelet[2793]: W0909 05:36:53.975949 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.976038 kubelet[2793]: E0909 05:36:53.975967 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.976678 kubelet[2793]: E0909 05:36:53.976636 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.976678 kubelet[2793]: W0909 05:36:53.976656 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.976925 kubelet[2793]: E0909 05:36:53.976856 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.977487 kubelet[2793]: E0909 05:36:53.977467 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.977641 kubelet[2793]: W0909 05:36:53.977621 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.977796 kubelet[2793]: E0909 05:36:53.977777 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.978395 kubelet[2793]: E0909 05:36:53.978375 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.978563 kubelet[2793]: W0909 05:36:53.978543 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.978811 kubelet[2793]: E0909 05:36:53.978734 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.979283 kubelet[2793]: E0909 05:36:53.979265 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.979468 kubelet[2793]: W0909 05:36:53.979419 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.979635 kubelet[2793]: E0909 05:36:53.979614 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.980221 kubelet[2793]: E0909 05:36:53.980203 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.980325 kubelet[2793]: W0909 05:36:53.980310 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.980845 kubelet[2793]: E0909 05:36:53.980474 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.981204 kubelet[2793]: E0909 05:36:53.981186 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.981317 kubelet[2793]: W0909 05:36:53.981300 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.981804 kubelet[2793]: E0909 05:36:53.981770 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.982087 kubelet[2793]: E0909 05:36:53.981963 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.982274 kubelet[2793]: W0909 05:36:53.982244 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.982834 kubelet[2793]: E0909 05:36:53.982479 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.983316 kubelet[2793]: E0909 05:36:53.983276 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.983316 kubelet[2793]: W0909 05:36:53.983295 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.983733 kubelet[2793]: E0909 05:36:53.983715 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.983966 kubelet[2793]: E0909 05:36:53.983928 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.983966 kubelet[2793]: W0909 05:36:53.983946 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.984243 kubelet[2793]: E0909 05:36:53.984198 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.986545 kubelet[2793]: E0909 05:36:53.986495 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.986545 kubelet[2793]: W0909 05:36:53.986521 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.986887 kubelet[2793]: E0909 05:36:53.986799 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.987349 kubelet[2793]: E0909 05:36:53.987306 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.987349 kubelet[2793]: W0909 05:36:53.987326 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.987690 kubelet[2793]: E0909 05:36:53.987645 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.988072 kubelet[2793]: E0909 05:36:53.988028 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.988072 kubelet[2793]: W0909 05:36:53.988047 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.988372 kubelet[2793]: E0909 05:36:53.988338 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.988831 kubelet[2793]: E0909 05:36:53.988791 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.988831 kubelet[2793]: W0909 05:36:53.988809 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.989072 kubelet[2793]: E0909 05:36:53.989055 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.989683 kubelet[2793]: E0909 05:36:53.989638 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.989683 kubelet[2793]: W0909 05:36:53.989658 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.990270 kubelet[2793]: E0909 05:36:53.990230 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.991454 kubelet[2793]: E0909 05:36:53.990511 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.991664 kubelet[2793]: W0909 05:36:53.991556 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.991806 kubelet[2793]: E0909 05:36:53.991761 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.993239 kubelet[2793]: E0909 05:36:53.992517 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.993239 kubelet[2793]: W0909 05:36:53.992537 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.993239 kubelet[2793]: E0909 05:36:53.992887 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.993239 kubelet[2793]: W0909 05:36:53.992902 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.993239 kubelet[2793]: E0909 05:36:53.993010 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.993239 kubelet[2793]: E0909 05:36:53.993203 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.993733 kubelet[2793]: E0909 05:36:53.993707 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.993794 kubelet[2793]: W0909 05:36:53.993737 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.993794 kubelet[2793]: E0909 05:36:53.993763 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.994250 kubelet[2793]: E0909 05:36:53.994228 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.994250 kubelet[2793]: W0909 05:36:53.994250 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.994401 kubelet[2793]: E0909 05:36:53.994383 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.995132 kubelet[2793]: E0909 05:36:53.995092 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.995132 kubelet[2793]: W0909 05:36:53.995120 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.995423 kubelet[2793]: E0909 05:36:53.995142 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:53.996571 kubelet[2793]: E0909 05:36:53.996541 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.996571 kubelet[2793]: W0909 05:36:53.996568 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.997626 kubelet[2793]: E0909 05:36:53.996587 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:53.998696 kubelet[2793]: E0909 05:36:53.998664 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:53.998696 kubelet[2793]: W0909 05:36:53.998691 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:53.998900 kubelet[2793]: E0909 05:36:53.998712 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:54.025903 kubelet[2793]: E0909 05:36:54.025503 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:54.025903 kubelet[2793]: W0909 05:36:54.025553 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:54.025903 kubelet[2793]: E0909 05:36:54.025584 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:54.058089 containerd[1614]: time="2025-09-09T05:36:54.058020999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lcbjr,Uid:c70a4e7b-a5ae-44d9-9d25-86d2efee70af,Namespace:calico-system,Attempt:0,} returns sandbox id \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\"" Sep 9 05:36:54.892955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994858104.mount: Deactivated successfully. 
Sep 9 05:36:55.154167 kubelet[2793]: E0909 05:36:55.152796 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:36:56.216177 containerd[1614]: time="2025-09-09T05:36:56.216052215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 05:36:56.217898 containerd[1614]: time="2025-09-09T05:36:56.216903356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:56.221458 containerd[1614]: time="2025-09-09T05:36:56.220992813Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:56.222654 containerd[1614]: time="2025-09-09T05:36:56.222400377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.460188129s" Sep 9 05:36:56.223134 containerd[1614]: time="2025-09-09T05:36:56.223101830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 05:36:56.223328 containerd[1614]: time="2025-09-09T05:36:56.223154813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 9 05:36:56.227553 containerd[1614]: time="2025-09-09T05:36:56.227501062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 05:36:56.254485 containerd[1614]: time="2025-09-09T05:36:56.253796941Z" level=info msg="CreateContainer within sandbox \"a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 05:36:56.270669 containerd[1614]: time="2025-09-09T05:36:56.269591121Z" level=info msg="Container 82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:56.290689 containerd[1614]: time="2025-09-09T05:36:56.290633335Z" level=info msg="CreateContainer within sandbox \"a51ee744b659d5792b350ae9dfa52e66e872d5c72fe94ba520304e65ba07ad33\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489\"" Sep 9 05:36:56.293030 containerd[1614]: time="2025-09-09T05:36:56.292865027Z" level=info msg="StartContainer for \"82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489\"" Sep 9 05:36:56.298396 containerd[1614]: time="2025-09-09T05:36:56.298340793Z" level=info msg="connecting to shim 82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489" address="unix:///run/containerd/s/499d0ffb8266a1924d951c60026e23b08c51f4b44b3937fd82eddabc7d2058c2" protocol=ttrpc version=3 Sep 9 05:36:56.350109 systemd[1]: Started cri-containerd-82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489.scope - libcontainer container 82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489. 
Sep 9 05:36:56.469806 containerd[1614]: time="2025-09-09T05:36:56.469416518Z" level=info msg="StartContainer for \"82461e800c8d2e1a7922b4b297de188cd086870899644e4bc5c202ec59c13489\" returns successfully" Sep 9 05:36:57.155231 kubelet[2793]: E0909 05:36:57.155158 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:36:57.370178 kubelet[2793]: E0909 05:36:57.370127 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.370178 kubelet[2793]: W0909 05:36:57.370165 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.370536 kubelet[2793]: E0909 05:36:57.370201 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.370672 kubelet[2793]: E0909 05:36:57.370646 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.370754 kubelet[2793]: W0909 05:36:57.370666 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.370754 kubelet[2793]: E0909 05:36:57.370708 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.371104 kubelet[2793]: E0909 05:36:57.371078 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.371104 kubelet[2793]: W0909 05:36:57.371099 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.371394 kubelet[2793]: E0909 05:36:57.371117 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.371592 kubelet[2793]: E0909 05:36:57.371425 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.371592 kubelet[2793]: W0909 05:36:57.371467 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.371592 kubelet[2793]: E0909 05:36:57.371484 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.371950 kubelet[2793]: E0909 05:36:57.371820 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.371950 kubelet[2793]: W0909 05:36:57.371834 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.371950 kubelet[2793]: E0909 05:36:57.371850 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.372319 kubelet[2793]: E0909 05:36:57.372146 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.372319 kubelet[2793]: W0909 05:36:57.372161 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.372319 kubelet[2793]: E0909 05:36:57.372183 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.372755 kubelet[2793]: E0909 05:36:57.372490 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.372755 kubelet[2793]: W0909 05:36:57.372504 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.372755 kubelet[2793]: E0909 05:36:57.372523 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.373102 kubelet[2793]: E0909 05:36:57.372808 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.373102 kubelet[2793]: W0909 05:36:57.372822 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.373102 kubelet[2793]: E0909 05:36:57.372838 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.373391 kubelet[2793]: E0909 05:36:57.373179 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.373391 kubelet[2793]: W0909 05:36:57.373193 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.373391 kubelet[2793]: E0909 05:36:57.373209 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.373707 kubelet[2793]: E0909 05:36:57.373507 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.373707 kubelet[2793]: W0909 05:36:57.373521 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.373707 kubelet[2793]: E0909 05:36:57.373537 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.373909 kubelet[2793]: E0909 05:36:57.373837 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.373909 kubelet[2793]: W0909 05:36:57.373851 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.373909 kubelet[2793]: E0909 05:36:57.373875 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.374191 kubelet[2793]: E0909 05:36:57.374168 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.374191 kubelet[2793]: W0909 05:36:57.374182 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.374354 kubelet[2793]: E0909 05:36:57.374197 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.374538 kubelet[2793]: E0909 05:36:57.374519 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.374632 kubelet[2793]: W0909 05:36:57.374537 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.374632 kubelet[2793]: E0909 05:36:57.374554 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.374882 kubelet[2793]: E0909 05:36:57.374853 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.374946 kubelet[2793]: W0909 05:36:57.374881 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.374946 kubelet[2793]: E0909 05:36:57.374897 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.375213 kubelet[2793]: E0909 05:36:57.375190 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.375213 kubelet[2793]: W0909 05:36:57.375210 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.375330 kubelet[2793]: E0909 05:36:57.375227 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.422480 kubelet[2793]: E0909 05:36:57.420713 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.422480 kubelet[2793]: W0909 05:36:57.420747 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.422480 kubelet[2793]: E0909 05:36:57.420777 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.422480 kubelet[2793]: E0909 05:36:57.421471 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.422480 kubelet[2793]: W0909 05:36:57.421486 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.422480 kubelet[2793]: E0909 05:36:57.422347 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.422480 kubelet[2793]: W0909 05:36:57.422368 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.422480 kubelet[2793]: E0909 05:36:57.422387 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.423033 kubelet[2793]: E0909 05:36:57.421878 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.423960 kubelet[2793]: E0909 05:36:57.423758 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.423960 kubelet[2793]: W0909 05:36:57.423785 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.423960 kubelet[2793]: E0909 05:36:57.423870 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.424595 kubelet[2793]: E0909 05:36:57.424536 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.424595 kubelet[2793]: W0909 05:36:57.424553 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.424595 kubelet[2793]: E0909 05:36:57.424587 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.425632 kubelet[2793]: E0909 05:36:57.425605 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.425719 kubelet[2793]: W0909 05:36:57.425633 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.425719 kubelet[2793]: E0909 05:36:57.425662 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.431456 kubelet[2793]: E0909 05:36:57.430878 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.431456 kubelet[2793]: W0909 05:36:57.430899 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.431456 kubelet[2793]: E0909 05:36:57.430917 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.433560 kubelet[2793]: E0909 05:36:57.433534 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.433560 kubelet[2793]: W0909 05:36:57.433560 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.433699 kubelet[2793]: E0909 05:36:57.433588 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.433952 kubelet[2793]: E0909 05:36:57.433929 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.434042 kubelet[2793]: W0909 05:36:57.433952 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.434042 kubelet[2793]: E0909 05:36:57.433970 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434246 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.435239 kubelet[2793]: W0909 05:36:57.434260 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434276 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434594 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.435239 kubelet[2793]: W0909 05:36:57.434611 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434628 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434919 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.435239 kubelet[2793]: W0909 05:36:57.434931 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.435239 kubelet[2793]: E0909 05:36:57.434945 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.435930 kubelet[2793]: E0909 05:36:57.435669 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.435930 kubelet[2793]: W0909 05:36:57.435685 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.435930 kubelet[2793]: E0909 05:36:57.435730 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.436137 kubelet[2793]: E0909 05:36:57.436064 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.436137 kubelet[2793]: W0909 05:36:57.436077 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.436137 kubelet[2793]: E0909 05:36:57.436112 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.436532 kubelet[2793]: E0909 05:36:57.436510 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.436532 kubelet[2793]: W0909 05:36:57.436529 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.436683 kubelet[2793]: E0909 05:36:57.436565 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.437079 kubelet[2793]: E0909 05:36:57.436988 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.437079 kubelet[2793]: W0909 05:36:57.437077 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.437250 kubelet[2793]: E0909 05:36:57.437160 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.437699 kubelet[2793]: E0909 05:36:57.437674 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.437699 kubelet[2793]: W0909 05:36:57.437694 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.438064 kubelet[2793]: E0909 05:36:57.437720 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:57.438177 kubelet[2793]: E0909 05:36:57.438153 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:57.438247 kubelet[2793]: W0909 05:36:57.438176 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:57.438247 kubelet[2793]: E0909 05:36:57.438194 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:57.631486 containerd[1614]: time="2025-09-09T05:36:57.631397683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:57.633044 containerd[1614]: time="2025-09-09T05:36:57.632805731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 05:36:57.634942 containerd[1614]: time="2025-09-09T05:36:57.634893195Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:57.639374 containerd[1614]: time="2025-09-09T05:36:57.639332792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:57.640573 containerd[1614]: time="2025-09-09T05:36:57.640329730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.412607821s" Sep 9 05:36:57.640573 containerd[1614]: time="2025-09-09T05:36:57.640392558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:36:57.645172 containerd[1614]: time="2025-09-09T05:36:57.645112580Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:36:57.661315 containerd[1614]: time="2025-09-09T05:36:57.660622897Z" level=info msg="Container bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:57.671911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866009941.mount: Deactivated successfully. 
Sep 9 05:36:57.676522 containerd[1614]: time="2025-09-09T05:36:57.675954229Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\"" Sep 9 05:36:57.678637 containerd[1614]: time="2025-09-09T05:36:57.678590112Z" level=info msg="StartContainer for \"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\"" Sep 9 05:36:57.682075 containerd[1614]: time="2025-09-09T05:36:57.682017911Z" level=info msg="connecting to shim bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08" address="unix:///run/containerd/s/01364e2651e942118c3e8da8749f0d1262e1f29653ec775bb18a91caa8cc0c22" protocol=ttrpc version=3 Sep 9 05:36:57.723976 systemd[1]: Started cri-containerd-bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08.scope - libcontainer container bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08. Sep 9 05:36:57.787074 containerd[1614]: time="2025-09-09T05:36:57.787008747Z" level=info msg="StartContainer for \"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\" returns successfully" Sep 9 05:36:57.804977 systemd[1]: cri-containerd-bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08.scope: Deactivated successfully. 
Sep 9 05:36:57.810918 containerd[1614]: time="2025-09-09T05:36:57.810831377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\" id:\"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\" pid:3480 exited_at:{seconds:1757396217 nanos:809133018}" Sep 9 05:36:57.811161 containerd[1614]: time="2025-09-09T05:36:57.811006868Z" level=info msg="received exit event container_id:\"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\" id:\"bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08\" pid:3480 exited_at:{seconds:1757396217 nanos:809133018}" Sep 9 05:36:57.851785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb8b8df8b91e61acdd561d4da48d909e81b076ff15c098a43c47c3572417e08-rootfs.mount: Deactivated successfully. Sep 9 05:36:58.291608 kubelet[2793]: I0909 05:36:58.291558 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:36:58.310222 kubelet[2793]: I0909 05:36:58.309716 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85f9c86b6-6ngn2" podStartSLOduration=2.844280585 podStartE2EDuration="5.309689395s" podCreationTimestamp="2025-09-09 05:36:53 +0000 UTC" firstStartedPulling="2025-09-09 05:36:53.760978113 +0000 UTC m=+22.870167816" lastFinishedPulling="2025-09-09 05:36:56.226386916 +0000 UTC m=+25.335576626" observedRunningTime="2025-09-09 05:36:57.303937252 +0000 UTC m=+26.413126966" watchObservedRunningTime="2025-09-09 05:36:58.309689395 +0000 UTC m=+27.418879112" Sep 9 05:36:59.154532 kubelet[2793]: E0909 05:36:59.153131 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 
9 05:37:00.303386 containerd[1614]: time="2025-09-09T05:37:00.303314081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:37:01.154876 kubelet[2793]: E0909 05:37:01.154759 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:37:03.155091 kubelet[2793]: E0909 05:37:03.155026 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:37:03.678957 containerd[1614]: time="2025-09-09T05:37:03.678872180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:03.680479 containerd[1614]: time="2025-09-09T05:37:03.680376802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:37:03.682107 containerd[1614]: time="2025-09-09T05:37:03.682021413Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:03.686484 containerd[1614]: time="2025-09-09T05:37:03.685705834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:03.687159 containerd[1614]: time="2025-09-09T05:37:03.687064496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.383684929s" Sep 9 05:37:03.687159 containerd[1614]: time="2025-09-09T05:37:03.687119906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:37:03.693263 containerd[1614]: time="2025-09-09T05:37:03.693213833Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:37:03.707877 containerd[1614]: time="2025-09-09T05:37:03.707828234Z" level=info msg="Container 6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:03.717782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733604081.mount: Deactivated successfully. 
Sep 9 05:37:03.732200 containerd[1614]: time="2025-09-09T05:37:03.732138097Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\"" Sep 9 05:37:03.733525 containerd[1614]: time="2025-09-09T05:37:03.733237592Z" level=info msg="StartContainer for \"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\"" Sep 9 05:37:03.736177 containerd[1614]: time="2025-09-09T05:37:03.736117800Z" level=info msg="connecting to shim 6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab" address="unix:///run/containerd/s/01364e2651e942118c3e8da8749f0d1262e1f29653ec775bb18a91caa8cc0c22" protocol=ttrpc version=3 Sep 9 05:37:03.778741 systemd[1]: Started cri-containerd-6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab.scope - libcontainer container 6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab. Sep 9 05:37:03.861647 containerd[1614]: time="2025-09-09T05:37:03.861597336Z" level=info msg="StartContainer for \"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\" returns successfully" Sep 9 05:37:04.955098 containerd[1614]: time="2025-09-09T05:37:04.955029877Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:37:04.959840 systemd[1]: cri-containerd-6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab.scope: Deactivated successfully. 
Sep 9 05:37:04.960921 containerd[1614]: time="2025-09-09T05:37:04.959959163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\" id:\"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\" pid:3538 exited_at:{seconds:1757396224 nanos:959515141}" Sep 9 05:37:04.960921 containerd[1614]: time="2025-09-09T05:37:04.960679101Z" level=info msg="received exit event container_id:\"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\" id:\"6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab\" pid:3538 exited_at:{seconds:1757396224 nanos:959515141}" Sep 9 05:37:04.960395 systemd[1]: cri-containerd-6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab.scope: Consumed 743ms CPU time, 195.7M memory peak, 171.3M written to disk. Sep 9 05:37:05.003296 kubelet[2793]: I0909 05:37:05.001828 2793 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 05:37:05.020172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c84767de443f73467cde493e8bdaba33e1921437a8eb9536505099d8c5262ab-rootfs.mount: Deactivated successfully. Sep 9 05:37:05.119139 systemd[1]: Created slice kubepods-besteffort-pod5fa1ecdb_01fb_4f51_8fbc_a2dac958e18d.slice - libcontainer container kubepods-besteffort-pod5fa1ecdb_01fb_4f51_8fbc_a2dac958e18d.slice. 
Sep 9 05:37:05.128962 kubelet[2793]: W0909 05:37:05.128922 2793 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' and this object Sep 9 05:37:05.129502 kubelet[2793]: E0909 05:37:05.129442 2793 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' and this object" logger="UnhandledError" Sep 9 05:37:05.129502 kubelet[2793]: W0909 05:37:05.129276 2793 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' and this object Sep 9 05:37:05.129502 kubelet[2793]: E0909 05:37:05.129547 2793 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 
'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' and this object" logger="UnhandledError" Sep 9 05:37:05.136009 systemd[1]: Created slice kubepods-burstable-pod3ec46691_51ce_4118_bbb6_ca3b35a2d45c.slice - libcontainer container kubepods-burstable-pod3ec46691_51ce_4118_bbb6_ca3b35a2d45c.slice. Sep 9 05:37:05.161999 systemd[1]: Created slice kubepods-burstable-pod1e37909a_15b0_4abe_9b56_16b705d6a0b9.slice - libcontainer container kubepods-burstable-pod1e37909a_15b0_4abe_9b56_16b705d6a0b9.slice. Sep 9 05:37:05.178540 systemd[1]: Created slice kubepods-besteffort-pode5a5b433_2f7b_4464_9d2f_741352c25aec.slice - libcontainer container kubepods-besteffort-pode5a5b433_2f7b_4464_9d2f_741352c25aec.slice. Sep 9 05:37:05.189457 kubelet[2793]: I0909 05:37:05.188660 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5a5b433-2f7b-4464-9d2f-741352c25aec-calico-apiserver-certs\") pod \"calico-apiserver-6b96d9f487-86crn\" (UID: \"e5a5b433-2f7b-4464-9d2f-741352c25aec\") " pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" Sep 9 05:37:05.189457 kubelet[2793]: I0909 05:37:05.188716 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbssk\" (UniqueName: \"kubernetes.io/projected/e5a5b433-2f7b-4464-9d2f-741352c25aec-kube-api-access-gbssk\") pod \"calico-apiserver-6b96d9f487-86crn\" (UID: \"e5a5b433-2f7b-4464-9d2f-741352c25aec\") " pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" Sep 9 05:37:05.189457 kubelet[2793]: I0909 05:37:05.188768 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgn6\" (UniqueName: \"kubernetes.io/projected/5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d-kube-api-access-nbgn6\") pod \"calico-kube-controllers-74d99b4c64-gjhhx\" (UID: \"5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d\") " 
pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" Sep 9 05:37:05.189457 kubelet[2793]: I0909 05:37:05.188801 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec46691-51ce-4118-bbb6-ca3b35a2d45c-config-volume\") pod \"coredns-7c65d6cfc9-9xc46\" (UID: \"3ec46691-51ce-4118-bbb6-ca3b35a2d45c\") " pod="kube-system/coredns-7c65d6cfc9-9xc46" Sep 9 05:37:05.189457 kubelet[2793]: I0909 05:37:05.188842 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e37909a-15b0-4abe-9b56-16b705d6a0b9-config-volume\") pod \"coredns-7c65d6cfc9-brhf6\" (UID: \"1e37909a-15b0-4abe-9b56-16b705d6a0b9\") " pod="kube-system/coredns-7c65d6cfc9-brhf6" Sep 9 05:37:05.189893 kubelet[2793]: I0909 05:37:05.188884 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d-tigera-ca-bundle\") pod \"calico-kube-controllers-74d99b4c64-gjhhx\" (UID: \"5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d\") " pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" Sep 9 05:37:05.189893 kubelet[2793]: I0909 05:37:05.188922 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvvtp\" (UniqueName: \"kubernetes.io/projected/3ec46691-51ce-4118-bbb6-ca3b35a2d45c-kube-api-access-bvvtp\") pod \"coredns-7c65d6cfc9-9xc46\" (UID: \"3ec46691-51ce-4118-bbb6-ca3b35a2d45c\") " pod="kube-system/coredns-7c65d6cfc9-9xc46" Sep 9 05:37:05.189893 kubelet[2793]: I0909 05:37:05.188951 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57t6\" (UniqueName: \"kubernetes.io/projected/1e37909a-15b0-4abe-9b56-16b705d6a0b9-kube-api-access-m57t6\") pod 
\"coredns-7c65d6cfc9-brhf6\" (UID: \"1e37909a-15b0-4abe-9b56-16b705d6a0b9\") " pod="kube-system/coredns-7c65d6cfc9-brhf6" Sep 9 05:37:05.201841 systemd[1]: Created slice kubepods-besteffort-podfb14b863_0bcb_461f_94e8_0e174d2118f2.slice - libcontainer container kubepods-besteffort-podfb14b863_0bcb_461f_94e8_0e174d2118f2.slice. Sep 9 05:37:05.210975 containerd[1614]: time="2025-09-09T05:37:05.209984966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx2vx,Uid:fb14b863-0bcb-461f-94e8-0e174d2118f2,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:05.222589 systemd[1]: Created slice kubepods-besteffort-pode9d9ba6a_f077_443e_a1c4_bee160d5b056.slice - libcontainer container kubepods-besteffort-pode9d9ba6a_f077_443e_a1c4_bee160d5b056.slice. Sep 9 05:37:05.240398 systemd[1]: Created slice kubepods-besteffort-pod4d97d611_7546_4d90_b668_89aa3816730e.slice - libcontainer container kubepods-besteffort-pod4d97d611_7546_4d90_b668_89aa3816730e.slice. Sep 9 05:37:05.255357 systemd[1]: Created slice kubepods-besteffort-podf2e5a79f_6883_4bf7_8f40_06be28144483.slice - libcontainer container kubepods-besteffort-podf2e5a79f_6883_4bf7_8f40_06be28144483.slice. 
Sep 9 05:37:05.291310 kubelet[2793]: I0909 05:37:05.289840 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvzk\" (UniqueName: \"kubernetes.io/projected/4d97d611-7546-4d90-b668-89aa3816730e-kube-api-access-ckvzk\") pod \"goldmane-7988f88666-nhngg\" (UID: \"4d97d611-7546-4d90-b668-89aa3816730e\") " pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 05:37:05.294951 kubelet[2793]: I0909 05:37:05.294086 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-backend-key-pair\") pod \"whisker-7454464f88-nrtrr\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " pod="calico-system/whisker-7454464f88-nrtrr" Sep 9 05:37:05.294951 kubelet[2793]: I0909 05:37:05.294171 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d97d611-7546-4d90-b668-89aa3816730e-config\") pod \"goldmane-7988f88666-nhngg\" (UID: \"4d97d611-7546-4d90-b668-89aa3816730e\") " pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 05:37:05.294951 kubelet[2793]: I0909 05:37:05.294231 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9d9ba6a-f077-443e-a1c4-bee160d5b056-calico-apiserver-certs\") pod \"calico-apiserver-6b96d9f487-55zq4\" (UID: \"e9d9ba6a-f077-443e-a1c4-bee160d5b056\") " pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" Sep 9 05:37:05.294951 kubelet[2793]: I0909 05:37:05.294261 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d97d611-7546-4d90-b668-89aa3816730e-goldmane-key-pair\") pod \"goldmane-7988f88666-nhngg\" (UID: 
\"4d97d611-7546-4d90-b668-89aa3816730e\") " pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 05:37:05.294951 kubelet[2793]: I0909 05:37:05.294296 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvcj\" (UniqueName: \"kubernetes.io/projected/f2e5a79f-6883-4bf7-8f40-06be28144483-kube-api-access-dlvcj\") pod \"whisker-7454464f88-nrtrr\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " pod="calico-system/whisker-7454464f88-nrtrr" Sep 9 05:37:05.295420 kubelet[2793]: I0909 05:37:05.294353 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85j2k\" (UniqueName: \"kubernetes.io/projected/e9d9ba6a-f077-443e-a1c4-bee160d5b056-kube-api-access-85j2k\") pod \"calico-apiserver-6b96d9f487-55zq4\" (UID: \"e9d9ba6a-f077-443e-a1c4-bee160d5b056\") " pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" Sep 9 05:37:05.300147 kubelet[2793]: I0909 05:37:05.297535 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d97d611-7546-4d90-b668-89aa3816730e-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nhngg\" (UID: \"4d97d611-7546-4d90-b668-89aa3816730e\") " pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 05:37:05.300147 kubelet[2793]: I0909 05:37:05.297587 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-ca-bundle\") pod \"whisker-7454464f88-nrtrr\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " pod="calico-system/whisker-7454464f88-nrtrr" Sep 9 05:37:05.425770 containerd[1614]: time="2025-09-09T05:37:05.425719588Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-74d99b4c64-gjhhx,Uid:5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:05.447899 containerd[1614]: time="2025-09-09T05:37:05.447840527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9xc46,Uid:3ec46691-51ce-4118-bbb6-ca3b35a2d45c,Namespace:kube-system,Attempt:0,}" Sep 9 05:37:05.478847 containerd[1614]: time="2025-09-09T05:37:05.478703621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brhf6,Uid:1e37909a-15b0-4abe-9b56-16b705d6a0b9,Namespace:kube-system,Attempt:0,}" Sep 9 05:37:05.552000 containerd[1614]: time="2025-09-09T05:37:05.551679569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nhngg,Uid:4d97d611-7546-4d90-b668-89aa3816730e,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:05.576696 containerd[1614]: time="2025-09-09T05:37:05.576635867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7454464f88-nrtrr,Uid:f2e5a79f-6883-4bf7-8f40-06be28144483,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:05.581018 containerd[1614]: time="2025-09-09T05:37:05.580948113Z" level=error msg="Failed to destroy network for sandbox \"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.593614 containerd[1614]: time="2025-09-09T05:37:05.593505397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx2vx,Uid:fb14b863-0bcb-461f-94e8-0e174d2118f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.598643 kubelet[2793]: E0909 05:37:05.598063 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.598643 kubelet[2793]: E0909 05:37:05.598191 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:37:05.598643 kubelet[2793]: E0909 05:37:05.598228 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sx2vx" Sep 9 05:37:05.598978 kubelet[2793]: E0909 05:37:05.598303 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sx2vx_calico-system(fb14b863-0bcb-461f-94e8-0e174d2118f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sx2vx_calico-system(fb14b863-0bcb-461f-94e8-0e174d2118f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b894266e8430d841677ec4295e61bce4ef6d4ec1c7bcd0e9771022d4890e4d24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sx2vx" podUID="fb14b863-0bcb-461f-94e8-0e174d2118f2" Sep 9 05:37:05.852188 containerd[1614]: time="2025-09-09T05:37:05.851823304Z" level=error msg="Failed to destroy network for sandbox \"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.855148 containerd[1614]: time="2025-09-09T05:37:05.855076084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d99b4c64-gjhhx,Uid:5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.856619 kubelet[2793]: E0909 05:37:05.855417 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.856619 kubelet[2793]: E0909 05:37:05.855536 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" Sep 9 05:37:05.856619 kubelet[2793]: E0909 05:37:05.855569 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" Sep 9 05:37:05.856809 kubelet[2793]: E0909 05:37:05.855645 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74d99b4c64-gjhhx_calico-system(5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74d99b4c64-gjhhx_calico-system(5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26577718f4ab5b249762d6905d9b24e13eb86e85b08e2987251fa19bb6b60d50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" podUID="5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d" Sep 9 05:37:05.885542 containerd[1614]: time="2025-09-09T05:37:05.885342256Z" level=error msg="Failed to destroy network for sandbox \"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.888012 containerd[1614]: time="2025-09-09T05:37:05.887908886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7454464f88-nrtrr,Uid:f2e5a79f-6883-4bf7-8f40-06be28144483,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.889199 kubelet[2793]: E0909 05:37:05.888192 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.889199 kubelet[2793]: E0909 05:37:05.888273 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7454464f88-nrtrr" Sep 9 05:37:05.889199 kubelet[2793]: E0909 05:37:05.888309 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-7454464f88-nrtrr" Sep 9 05:37:05.889469 kubelet[2793]: E0909 05:37:05.888387 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7454464f88-nrtrr_calico-system(f2e5a79f-6883-4bf7-8f40-06be28144483)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7454464f88-nrtrr_calico-system(f2e5a79f-6883-4bf7-8f40-06be28144483)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ad32051b2c618885417fb98e581dff01eb04446592b59b6c0ffa73846c803b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7454464f88-nrtrr" podUID="f2e5a79f-6883-4bf7-8f40-06be28144483" Sep 9 05:37:05.908506 containerd[1614]: time="2025-09-09T05:37:05.908229473Z" level=error msg="Failed to destroy network for sandbox \"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.911760 containerd[1614]: time="2025-09-09T05:37:05.911340261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9xc46,Uid:3ec46691-51ce-4118-bbb6-ca3b35a2d45c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.912007 kubelet[2793]: E0909 05:37:05.911920 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.912106 kubelet[2793]: E0909 05:37:05.912026 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9xc46" Sep 9 05:37:05.912192 kubelet[2793]: E0909 05:37:05.912096 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9xc46" Sep 9 05:37:05.913466 kubelet[2793]: E0909 05:37:05.912224 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9xc46_kube-system(3ec46691-51ce-4118-bbb6-ca3b35a2d45c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9xc46_kube-system(3ec46691-51ce-4118-bbb6-ca3b35a2d45c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be6a0cbe057596fc958119e4267f932f65ae49c50854ccd8f047ace32e0938b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-9xc46" podUID="3ec46691-51ce-4118-bbb6-ca3b35a2d45c" Sep 9 05:37:05.915846 containerd[1614]: time="2025-09-09T05:37:05.915784143Z" level=error msg="Failed to destroy network for sandbox \"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.918471 containerd[1614]: time="2025-09-09T05:37:05.918295099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nhngg,Uid:4d97d611-7546-4d90-b668-89aa3816730e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.920029 kubelet[2793]: E0909 05:37:05.919986 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.920981 kubelet[2793]: E0909 05:37:05.920508 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 
05:37:05.920981 kubelet[2793]: E0909 05:37:05.920607 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nhngg" Sep 9 05:37:05.921460 kubelet[2793]: E0909 05:37:05.920698 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nhngg_calico-system(4d97d611-7546-4d90-b668-89aa3816730e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nhngg_calico-system(4d97d611-7546-4d90-b668-89aa3816730e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7847c9ae2b1ebc62bf7f3e357c065859379db9ecebc44e1b59802c2767e17cb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nhngg" podUID="4d97d611-7546-4d90-b668-89aa3816730e" Sep 9 05:37:05.924325 containerd[1614]: time="2025-09-09T05:37:05.924265618Z" level=error msg="Failed to destroy network for sandbox \"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.926065 containerd[1614]: time="2025-09-09T05:37:05.926011666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brhf6,Uid:1e37909a-15b0-4abe-9b56-16b705d6a0b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.926383 kubelet[2793]: E0909 05:37:05.926281 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:05.926383 kubelet[2793]: E0909 05:37:05.926342 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-brhf6" Sep 9 05:37:05.926383 kubelet[2793]: E0909 05:37:05.926373 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-brhf6" Sep 9 05:37:05.927247 kubelet[2793]: E0909 05:37:05.926478 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-brhf6_kube-system(1e37909a-15b0-4abe-9b56-16b705d6a0b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-brhf6_kube-system(1e37909a-15b0-4abe-9b56-16b705d6a0b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"788b35fdff835d6a12c4e8e1300d6b1e09ec77de7cc0713cce7cba506063002c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-brhf6" podUID="1e37909a-15b0-4abe-9b56-16b705d6a0b9" Sep 9 05:37:06.026734 systemd[1]: run-netns-cni\x2deeb6512b\x2d79e5\x2d4f7b\x2dfe6a\x2d7e9dbcc9be7f.mount: Deactivated successfully. Sep 9 05:37:06.239537 kubelet[2793]: I0909 05:37:06.239089 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:37:06.293300 kubelet[2793]: E0909 05:37:06.293234 2793 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 9 05:37:06.293545 kubelet[2793]: E0909 05:37:06.293372 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5a5b433-2f7b-4464-9d2f-741352c25aec-calico-apiserver-certs podName:e5a5b433-2f7b-4464-9d2f-741352c25aec nodeName:}" failed. No retries permitted until 2025-09-09 05:37:06.793340329 +0000 UTC m=+35.902530038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e5a5b433-2f7b-4464-9d2f-741352c25aec-calico-apiserver-certs") pod "calico-apiserver-6b96d9f487-86crn" (UID: "e5a5b433-2f7b-4464-9d2f-741352c25aec") : failed to sync secret cache: timed out waiting for the condition Sep 9 05:37:06.331648 containerd[1614]: time="2025-09-09T05:37:06.331579022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:37:06.359972 kubelet[2793]: E0909 05:37:06.359905 2793 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:06.359972 kubelet[2793]: E0909 05:37:06.359954 2793 projected.go:194] Error preparing data for projected volume kube-api-access-gbssk for pod calico-apiserver/calico-apiserver-6b96d9f487-86crn: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:06.360248 kubelet[2793]: E0909 05:37:06.360038 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5a5b433-2f7b-4464-9d2f-741352c25aec-kube-api-access-gbssk podName:e5a5b433-2f7b-4464-9d2f-741352c25aec nodeName:}" failed. No retries permitted until 2025-09-09 05:37:06.860014337 +0000 UTC m=+35.969204045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gbssk" (UniqueName: "kubernetes.io/projected/e5a5b433-2f7b-4464-9d2f-741352c25aec-kube-api-access-gbssk") pod "calico-apiserver-6b96d9f487-86crn" (UID: "e5a5b433-2f7b-4464-9d2f-741352c25aec") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:06.403243 kubelet[2793]: E0909 05:37:06.403159 2793 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 9 05:37:06.403539 kubelet[2793]: E0909 05:37:06.403287 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d9ba6a-f077-443e-a1c4-bee160d5b056-calico-apiserver-certs podName:e9d9ba6a-f077-443e-a1c4-bee160d5b056 nodeName:}" failed. No retries permitted until 2025-09-09 05:37:06.903256695 +0000 UTC m=+36.012446394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e9d9ba6a-f077-443e-a1c4-bee160d5b056-calico-apiserver-certs") pod "calico-apiserver-6b96d9f487-55zq4" (UID: "e9d9ba6a-f077-443e-a1c4-bee160d5b056") : failed to sync secret cache: timed out waiting for the condition Sep 9 05:37:06.435243 kubelet[2793]: E0909 05:37:06.435177 2793 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:06.435243 kubelet[2793]: E0909 05:37:06.435238 2793 projected.go:194] Error preparing data for projected volume kube-api-access-85j2k for pod calico-apiserver/calico-apiserver-6b96d9f487-55zq4: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:06.435573 kubelet[2793]: E0909 05:37:06.435325 2793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9d9ba6a-f077-443e-a1c4-bee160d5b056-kube-api-access-85j2k podName:e9d9ba6a-f077-443e-a1c4-bee160d5b056 nodeName:}" failed. 
No retries permitted until 2025-09-09 05:37:06.935299744 +0000 UTC m=+36.044489454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-85j2k" (UniqueName: "kubernetes.io/projected/e9d9ba6a-f077-443e-a1c4-bee160d5b056-kube-api-access-85j2k") pod "calico-apiserver-6b96d9f487-55zq4" (UID: "e9d9ba6a-f077-443e-a1c4-bee160d5b056") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:37:07.000343 containerd[1614]: time="2025-09-09T05:37:07.000274953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-86crn,Uid:e5a5b433-2f7b-4464-9d2f-741352c25aec,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:37:07.094413 containerd[1614]: time="2025-09-09T05:37:07.094335361Z" level=error msg="Failed to destroy network for sandbox \"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.097616 containerd[1614]: time="2025-09-09T05:37:07.097551218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-86crn,Uid:e5a5b433-2f7b-4464-9d2f-741352c25aec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.099849 kubelet[2793]: E0909 05:37:07.099679 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.100159 kubelet[2793]: E0909 05:37:07.100045 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" Sep 9 05:37:07.100159 kubelet[2793]: E0909 05:37:07.100108 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" Sep 9 05:37:07.100599 systemd[1]: run-netns-cni\x2d6236645f\x2d700e\x2dab38\x2dc7eb\x2dddbc236c0d62.mount: Deactivated successfully. 
Sep 9 05:37:07.106005 kubelet[2793]: E0909 05:37:07.105545 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b96d9f487-86crn_calico-apiserver(e5a5b433-2f7b-4464-9d2f-741352c25aec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b96d9f487-86crn_calico-apiserver(e5a5b433-2f7b-4464-9d2f-741352c25aec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dce7603767e84cc539bf0b9095a9c146881aba8750d321f133f79027fba1419b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" podUID="e5a5b433-2f7b-4464-9d2f-741352c25aec" Sep 9 05:37:07.331697 containerd[1614]: time="2025-09-09T05:37:07.331530852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-55zq4,Uid:e9d9ba6a-f077-443e-a1c4-bee160d5b056,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:37:07.422870 containerd[1614]: time="2025-09-09T05:37:07.422792433Z" level=error msg="Failed to destroy network for sandbox \"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.428567 containerd[1614]: time="2025-09-09T05:37:07.428066524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-55zq4,Uid:e9d9ba6a-f077-443e-a1c4-bee160d5b056,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.431017 systemd[1]: run-netns-cni\x2d64e00983\x2dee54\x2d08ab\x2d05c8\x2d8db5b7e798dc.mount: Deactivated successfully. Sep 9 05:37:07.431722 kubelet[2793]: E0909 05:37:07.431650 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:37:07.432224 kubelet[2793]: E0909 05:37:07.431739 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" Sep 9 05:37:07.432224 kubelet[2793]: E0909 05:37:07.431772 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" Sep 9 05:37:07.432224 kubelet[2793]: E0909 05:37:07.431874 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b96d9f487-55zq4_calico-apiserver(e9d9ba6a-f077-443e-a1c4-bee160d5b056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6b96d9f487-55zq4_calico-apiserver(e9d9ba6a-f077-443e-a1c4-bee160d5b056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a3dc2b9b0a981ca75f72a73ae67ad3634cf964727acddb046c16b826b19a3d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" podUID="e9d9ba6a-f077-443e-a1c4-bee160d5b056" Sep 9 05:37:13.580668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466029742.mount: Deactivated successfully. Sep 9 05:37:13.619649 containerd[1614]: time="2025-09-09T05:37:13.619561698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:13.621706 containerd[1614]: time="2025-09-09T05:37:13.621452851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 05:37:13.623198 containerd[1614]: time="2025-09-09T05:37:13.623144557Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:13.626279 containerd[1614]: time="2025-09-09T05:37:13.626233106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:13.627261 containerd[1614]: time="2025-09-09T05:37:13.627218910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", 
size \"157078201\" in 7.295582144s" Sep 9 05:37:13.627450 containerd[1614]: time="2025-09-09T05:37:13.627401683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 05:37:13.657043 containerd[1614]: time="2025-09-09T05:37:13.656970543Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:37:13.673236 containerd[1614]: time="2025-09-09T05:37:13.673170016Z" level=info msg="Container 3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:13.696710 containerd[1614]: time="2025-09-09T05:37:13.696634089Z" level=info msg="CreateContainer within sandbox \"d06682df698a5697916722c6761e68183047fb6bff4d7f336fc8e42fe98e63c3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\"" Sep 9 05:37:13.697664 containerd[1614]: time="2025-09-09T05:37:13.697614942Z" level=info msg="StartContainer for \"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\"" Sep 9 05:37:13.700295 containerd[1614]: time="2025-09-09T05:37:13.700230681Z" level=info msg="connecting to shim 3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b" address="unix:///run/containerd/s/01364e2651e942118c3e8da8749f0d1262e1f29653ec775bb18a91caa8cc0c22" protocol=ttrpc version=3 Sep 9 05:37:13.731702 systemd[1]: Started cri-containerd-3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b.scope - libcontainer container 3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b. 
Sep 9 05:37:13.812085 containerd[1614]: time="2025-09-09T05:37:13.811911318Z" level=info msg="StartContainer for \"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\" returns successfully" Sep 9 05:37:13.952026 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:37:13.952678 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 05:37:14.186670 kubelet[2793]: I0909 05:37:14.184548 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-backend-key-pair\") pod \"f2e5a79f-6883-4bf7-8f40-06be28144483\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " Sep 9 05:37:14.186670 kubelet[2793]: I0909 05:37:14.184646 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-ca-bundle\") pod \"f2e5a79f-6883-4bf7-8f40-06be28144483\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " Sep 9 05:37:14.186670 kubelet[2793]: I0909 05:37:14.184697 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlvcj\" (UniqueName: \"kubernetes.io/projected/f2e5a79f-6883-4bf7-8f40-06be28144483-kube-api-access-dlvcj\") pod \"f2e5a79f-6883-4bf7-8f40-06be28144483\" (UID: \"f2e5a79f-6883-4bf7-8f40-06be28144483\") " Sep 9 05:37:14.188026 kubelet[2793]: I0909 05:37:14.187973 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2e5a79f-6883-4bf7-8f40-06be28144483" (UID: "f2e5a79f-6883-4bf7-8f40-06be28144483"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 05:37:14.192373 kubelet[2793]: I0909 05:37:14.192313 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2e5a79f-6883-4bf7-8f40-06be28144483" (UID: "f2e5a79f-6883-4bf7-8f40-06be28144483"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 05:37:14.195594 kubelet[2793]: I0909 05:37:14.195500 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e5a79f-6883-4bf7-8f40-06be28144483-kube-api-access-dlvcj" (OuterVolumeSpecName: "kube-api-access-dlvcj") pod "f2e5a79f-6883-4bf7-8f40-06be28144483" (UID: "f2e5a79f-6883-4bf7-8f40-06be28144483"). InnerVolumeSpecName "kube-api-access-dlvcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 05:37:14.286176 kubelet[2793]: I0909 05:37:14.286007 2793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlvcj\" (UniqueName: \"kubernetes.io/projected/f2e5a79f-6883-4bf7-8f40-06be28144483-kube-api-access-dlvcj\") on node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" DevicePath \"\"" Sep 9 05:37:14.286176 kubelet[2793]: I0909 05:37:14.286077 2793 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-backend-key-pair\") on node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" DevicePath \"\"" Sep 9 05:37:14.286176 kubelet[2793]: I0909 05:37:14.286097 2793 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e5a79f-6883-4bf7-8f40-06be28144483-whisker-ca-bundle\") on node \"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb\" DevicePath \"\"" Sep 9 05:37:14.389106 systemd[1]: 
Removed slice kubepods-besteffort-podf2e5a79f_6883_4bf7_8f40_06be28144483.slice - libcontainer container kubepods-besteffort-podf2e5a79f_6883_4bf7_8f40_06be28144483.slice. Sep 9 05:37:14.409348 kubelet[2793]: I0909 05:37:14.408428 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lcbjr" podStartSLOduration=1.8409729929999998 podStartE2EDuration="21.408398356s" podCreationTimestamp="2025-09-09 05:36:53 +0000 UTC" firstStartedPulling="2025-09-09 05:36:54.061153077 +0000 UTC m=+23.170342774" lastFinishedPulling="2025-09-09 05:37:13.628578444 +0000 UTC m=+42.737768137" observedRunningTime="2025-09-09 05:37:14.407496689 +0000 UTC m=+43.516686403" watchObservedRunningTime="2025-09-09 05:37:14.408398356 +0000 UTC m=+43.517588073" Sep 9 05:37:14.516348 systemd[1]: Created slice kubepods-besteffort-pod2713bf72_4622_485d_bbb9_51678f99f161.slice - libcontainer container kubepods-besteffort-pod2713bf72_4622_485d_bbb9_51678f99f161.slice. Sep 9 05:37:14.579200 systemd[1]: var-lib-kubelet-pods-f2e5a79f\x2d6883\x2d4bf7\x2d8f40\x2d06be28144483-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlvcj.mount: Deactivated successfully. Sep 9 05:37:14.579373 systemd[1]: var-lib-kubelet-pods-f2e5a79f\x2d6883\x2d4bf7\x2d8f40\x2d06be28144483-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 05:37:14.588664 kubelet[2793]: I0909 05:37:14.588608 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2713bf72-4622-485d-bbb9-51678f99f161-whisker-backend-key-pair\") pod \"whisker-654677dfc9-cv72z\" (UID: \"2713bf72-4622-485d-bbb9-51678f99f161\") " pod="calico-system/whisker-654677dfc9-cv72z" Sep 9 05:37:14.588842 kubelet[2793]: I0909 05:37:14.588737 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2713bf72-4622-485d-bbb9-51678f99f161-whisker-ca-bundle\") pod \"whisker-654677dfc9-cv72z\" (UID: \"2713bf72-4622-485d-bbb9-51678f99f161\") " pod="calico-system/whisker-654677dfc9-cv72z" Sep 9 05:37:14.588842 kubelet[2793]: I0909 05:37:14.588792 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq6q\" (UniqueName: \"kubernetes.io/projected/2713bf72-4622-485d-bbb9-51678f99f161-kube-api-access-4gq6q\") pod \"whisker-654677dfc9-cv72z\" (UID: \"2713bf72-4622-485d-bbb9-51678f99f161\") " pod="calico-system/whisker-654677dfc9-cv72z" Sep 9 05:37:14.835301 containerd[1614]: time="2025-09-09T05:37:14.835116537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-654677dfc9-cv72z,Uid:2713bf72-4622-485d-bbb9-51678f99f161,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:14.994400 systemd-networkd[1467]: cali5923c394888: Link UP Sep 9 05:37:14.997086 systemd-networkd[1467]: cali5923c394888: Gained carrier Sep 9 05:37:15.025109 containerd[1614]: 2025-09-09 05:37:14.874 [INFO][3869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:37:15.025109 containerd[1614]: 2025-09-09 05:37:14.890 [INFO][3869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0 whisker-654677dfc9- calico-system 2713bf72-4622-485d-bbb9-51678f99f161 883 0 2025-09-09 05:37:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:654677dfc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb whisker-654677dfc9-cv72z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5923c394888 [] [] }} ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-" Sep 9 05:37:15.025109 containerd[1614]: 2025-09-09 05:37:14.890 [INFO][3869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.025109 containerd[1614]: 2025-09-09 05:37:14.927 [INFO][3883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" HandleID="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.927 [INFO][3883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" HandleID="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" 
Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"whisker-654677dfc9-cv72z", "timestamp":"2025-09-09 05:37:14.927726877 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.928 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.928 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.928 [INFO][3883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.939 [INFO][3883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.944 [INFO][3883] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.949 [INFO][3883] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.025662 containerd[1614]: 2025-09-09 05:37:14.952 [INFO][3883] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 
host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.954 [INFO][3883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.954 [INFO][3883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.956 [INFO][3883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312 Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.963 [INFO][3883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.971 [INFO][3883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.129/26] block=192.168.15.128/26 handle="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.971 [INFO][3883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.129/26] handle="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.971 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:37:15.026158 containerd[1614]: 2025-09-09 05:37:14.972 [INFO][3883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.129/26] IPv6=[] ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" HandleID="k8s-pod-network.16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.028968 containerd[1614]: 2025-09-09 05:37:14.982 [INFO][3869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0", GenerateName:"whisker-654677dfc9-", Namespace:"calico-system", SelfLink:"", UID:"2713bf72-4622-485d-bbb9-51678f99f161", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"654677dfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"whisker-654677dfc9-cv72z", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5923c394888", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:15.029757 containerd[1614]: 2025-09-09 05:37:14.983 [INFO][3869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.129/32] ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.029757 containerd[1614]: 2025-09-09 05:37:14.983 [INFO][3869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5923c394888 ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.029757 containerd[1614]: 2025-09-09 05:37:14.995 [INFO][3869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.030116 containerd[1614]: 2025-09-09 05:37:14.995 [INFO][3869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0", GenerateName:"whisker-654677dfc9-", Namespace:"calico-system", SelfLink:"", UID:"2713bf72-4622-485d-bbb9-51678f99f161", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"654677dfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312", Pod:"whisker-654677dfc9-cv72z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5923c394888", MAC:"02:88:42:19:9e:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:15.030321 containerd[1614]: 2025-09-09 05:37:15.010 [INFO][3869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" Namespace="calico-system" Pod="whisker-654677dfc9-cv72z" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-whisker--654677dfc9--cv72z-eth0" Sep 9 05:37:15.064093 containerd[1614]: 
time="2025-09-09T05:37:15.064024574Z" level=info msg="connecting to shim 16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312" address="unix:///run/containerd/s/9a528f79e7baed14e287b5804d188d3df8942af376b002b3ffb00ba7a1db6f23" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:15.101710 systemd[1]: Started cri-containerd-16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312.scope - libcontainer container 16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312. Sep 9 05:37:15.164143 kubelet[2793]: I0909 05:37:15.163661 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e5a79f-6883-4bf7-8f40-06be28144483" path="/var/lib/kubelet/pods/f2e5a79f-6883-4bf7-8f40-06be28144483/volumes" Sep 9 05:37:15.177252 containerd[1614]: time="2025-09-09T05:37:15.177173580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-654677dfc9-cv72z,Uid:2713bf72-4622-485d-bbb9-51678f99f161,Namespace:calico-system,Attempt:0,} returns sandbox id \"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312\"" Sep 9 05:37:15.179666 containerd[1614]: time="2025-09-09T05:37:15.179593628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:37:16.154079 containerd[1614]: time="2025-09-09T05:37:16.154013540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx2vx,Uid:fb14b863-0bcb-461f-94e8-0e174d2118f2,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:16.371881 systemd-networkd[1467]: cali70626749932: Link UP Sep 9 05:37:16.374236 systemd-networkd[1467]: cali70626749932: Gained carrier Sep 9 05:37:16.410646 containerd[1614]: 2025-09-09 05:37:16.243 [INFO][4060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0 csi-node-driver- calico-system fb14b863-0bcb-461f-94e8-0e174d2118f2 691 0 2025-09-09 05:36:53 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb csi-node-driver-sx2vx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali70626749932 [] [] }} ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-" Sep 9 05:37:16.410646 containerd[1614]: 2025-09-09 05:37:16.244 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.410646 containerd[1614]: 2025-09-09 05:37:16.301 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" HandleID="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.302 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" HandleID="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002cfd60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"csi-node-driver-sx2vx", "timestamp":"2025-09-09 05:37:16.301611786 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.302 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.302 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.302 [INFO][4071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.314 [INFO][4071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.322 [INFO][4071] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.330 [INFO][4071] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.411116 containerd[1614]: 2025-09-09 05:37:16.333 [INFO][4071] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.336 [INFO][4071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.336 [INFO][4071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.338 [INFO][4071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115 Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.346 [INFO][4071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.357 [INFO][4071] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.130/26] block=192.168.15.128/26 handle="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.357 [INFO][4071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.130/26] handle="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.357 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:37:16.413472 containerd[1614]: 2025-09-09 05:37:16.357 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.130/26] IPv6=[] ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" HandleID="k8s-pod-network.124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.415046 containerd[1614]: 2025-09-09 05:37:16.361 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb14b863-0bcb-461f-94e8-0e174d2118f2", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", 
Pod:"csi-node-driver-sx2vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70626749932", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:16.415294 containerd[1614]: 2025-09-09 05:37:16.362 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.130/32] ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.415294 containerd[1614]: 2025-09-09 05:37:16.362 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70626749932 ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.415294 containerd[1614]: 2025-09-09 05:37:16.377 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.415794 containerd[1614]: 2025-09-09 05:37:16.379 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" Namespace="calico-system" Pod="csi-node-driver-sx2vx" 
WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb14b863-0bcb-461f-94e8-0e174d2118f2", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115", Pod:"csi-node-driver-sx2vx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70626749932", MAC:"ea:41:dc:ea:95:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:16.416069 containerd[1614]: 2025-09-09 05:37:16.405 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" 
Namespace="calico-system" Pod="csi-node-driver-sx2vx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-csi--node--driver--sx2vx-eth0" Sep 9 05:37:16.462423 containerd[1614]: time="2025-09-09T05:37:16.460227913Z" level=info msg="connecting to shim 124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115" address="unix:///run/containerd/s/1ff2660e77b11f9a667d587b8cc185ef200e0833080c74f9b5e64ac2da448eb5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:16.553704 systemd[1]: Started cri-containerd-124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115.scope - libcontainer container 124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115. Sep 9 05:37:16.707721 containerd[1614]: time="2025-09-09T05:37:16.707671579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx2vx,Uid:fb14b863-0bcb-461f-94e8-0e174d2118f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115\"" Sep 9 05:37:16.752767 systemd-networkd[1467]: cali5923c394888: Gained IPv6LL Sep 9 05:37:16.765586 systemd-networkd[1467]: vxlan.calico: Link UP Sep 9 05:37:16.765603 systemd-networkd[1467]: vxlan.calico: Gained carrier Sep 9 05:37:16.955765 containerd[1614]: time="2025-09-09T05:37:16.955693712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:16.958541 containerd[1614]: time="2025-09-09T05:37:16.958386347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:37:16.959708 containerd[1614]: time="2025-09-09T05:37:16.959636750Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:16.965451 containerd[1614]: time="2025-09-09T05:37:16.964865583Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:16.967709 containerd[1614]: time="2025-09-09T05:37:16.967658954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.788010574s" Sep 9 05:37:16.967848 containerd[1614]: time="2025-09-09T05:37:16.967714026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 05:37:16.970289 containerd[1614]: time="2025-09-09T05:37:16.970022050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 05:37:16.973647 containerd[1614]: time="2025-09-09T05:37:16.973607577Z" level=info msg="CreateContainer within sandbox \"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:37:16.989468 containerd[1614]: time="2025-09-09T05:37:16.986772471Z" level=info msg="Container 34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:17.000579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671988280.mount: Deactivated successfully. 
Sep 9 05:37:17.009194 containerd[1614]: time="2025-09-09T05:37:17.009123174Z" level=info msg="CreateContainer within sandbox \"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4\"" Sep 9 05:37:17.011717 containerd[1614]: time="2025-09-09T05:37:17.011674395Z" level=info msg="StartContainer for \"34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4\"" Sep 9 05:37:17.016315 containerd[1614]: time="2025-09-09T05:37:17.016264221Z" level=info msg="connecting to shim 34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4" address="unix:///run/containerd/s/9a528f79e7baed14e287b5804d188d3df8942af376b002b3ffb00ba7a1db6f23" protocol=ttrpc version=3 Sep 9 05:37:17.063977 systemd[1]: Started cri-containerd-34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4.scope - libcontainer container 34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4. 
Sep 9 05:37:17.156036 containerd[1614]: time="2025-09-09T05:37:17.155928545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brhf6,Uid:1e37909a-15b0-4abe-9b56-16b705d6a0b9,Namespace:kube-system,Attempt:0,}" Sep 9 05:37:17.206197 containerd[1614]: time="2025-09-09T05:37:17.206089806Z" level=info msg="StartContainer for \"34e3325986a4ee2f5194b0782ecff8a8cfd5552a870888684a58b7ec907d3cf4\" returns successfully" Sep 9 05:37:17.419487 systemd-networkd[1467]: cali56ff79eef6f: Link UP Sep 9 05:37:17.419853 systemd-networkd[1467]: cali56ff79eef6f: Gained carrier Sep 9 05:37:17.449516 containerd[1614]: 2025-09-09 05:37:17.281 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0 coredns-7c65d6cfc9- kube-system 1e37909a-15b0-4abe-9b56-16b705d6a0b9 807 0 2025-09-09 05:36:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb coredns-7c65d6cfc9-brhf6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56ff79eef6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-" Sep 9 05:37:17.449516 containerd[1614]: 2025-09-09 05:37:17.282 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" 
WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.449516 containerd[1614]: 2025-09-09 05:37:17.341 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" HandleID="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.342 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" HandleID="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"coredns-7c65d6cfc9-brhf6", "timestamp":"2025-09-09 05:37:17.341674374 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.342 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.342 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.342 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.354 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.364 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.371 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.449907 containerd[1614]: 2025-09-09 05:37:17.373 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.376 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.376 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.379 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44 Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.389 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 
handle="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.405 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.131/26] block=192.168.15.128/26 handle="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.405 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.131/26] handle="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.405 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:37:17.450955 containerd[1614]: 2025-09-09 05:37:17.405 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.131/26] IPv6=[] ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" HandleID="k8s-pod-network.16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.451498 containerd[1614]: 2025-09-09 05:37:17.411 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0", GenerateName:"coredns-7c65d6cfc9-", 
Namespace:"kube-system", SelfLink:"", UID:"1e37909a-15b0-4abe-9b56-16b705d6a0b9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"coredns-7c65d6cfc9-brhf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56ff79eef6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:17.451498 containerd[1614]: 2025-09-09 05:37:17.412 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.131/32] ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.451498 
containerd[1614]: 2025-09-09 05:37:17.412 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56ff79eef6f ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.451498 containerd[1614]: 2025-09-09 05:37:17.421 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.451498 containerd[1614]: 2025-09-09 05:37:17.421 [INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e37909a-15b0-4abe-9b56-16b705d6a0b9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44", Pod:"coredns-7c65d6cfc9-brhf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56ff79eef6f", MAC:"ca:77:57:9f:d0:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:17.451498 containerd[1614]: 2025-09-09 05:37:17.440 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brhf6" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--brhf6-eth0" Sep 9 05:37:17.520699 systemd-networkd[1467]: cali70626749932: Gained IPv6LL Sep 9 05:37:17.523304 containerd[1614]: time="2025-09-09T05:37:17.520085018Z" level=info msg="connecting to shim 16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44" address="unix:///run/containerd/s/4b95544f9e47962a81576b910bacbfb6cab2815bb682f8c4d3fbedc9be1207de" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:17.579716 systemd[1]: Started 
cri-containerd-16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44.scope - libcontainer container 16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44. Sep 9 05:37:17.704292 containerd[1614]: time="2025-09-09T05:37:17.704233901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brhf6,Uid:1e37909a-15b0-4abe-9b56-16b705d6a0b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44\"" Sep 9 05:37:17.714364 containerd[1614]: time="2025-09-09T05:37:17.714238530Z" level=info msg="CreateContainer within sandbox \"16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:37:17.733491 containerd[1614]: time="2025-09-09T05:37:17.732685035Z" level=info msg="Container 6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:17.749177 containerd[1614]: time="2025-09-09T05:37:17.749101253Z" level=info msg="CreateContainer within sandbox \"16fd361859b753b4b62233da41658a3ac73f3954052ec82962b963b3ac8a0b44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a\"" Sep 9 05:37:17.751974 containerd[1614]: time="2025-09-09T05:37:17.751786111Z" level=info msg="StartContainer for \"6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a\"" Sep 9 05:37:17.755367 containerd[1614]: time="2025-09-09T05:37:17.755301376Z" level=info msg="connecting to shim 6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a" address="unix:///run/containerd/s/4b95544f9e47962a81576b910bacbfb6cab2815bb682f8c4d3fbedc9be1207de" protocol=ttrpc version=3 Sep 9 05:37:17.802827 systemd[1]: Started cri-containerd-6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a.scope - libcontainer container 
6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a. Sep 9 05:37:17.861180 containerd[1614]: time="2025-09-09T05:37:17.861112628Z" level=info msg="StartContainer for \"6e5d6771443f782a8b85cfc73feafd66aa7f61becd708283225ecf4523d8b92a\" returns successfully" Sep 9 05:37:18.154356 containerd[1614]: time="2025-09-09T05:37:18.154212122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-86crn,Uid:e5a5b433-2f7b-4464-9d2f-741352c25aec,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:37:18.251354 containerd[1614]: time="2025-09-09T05:37:18.251295458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:18.255216 containerd[1614]: time="2025-09-09T05:37:18.255157294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 05:37:18.257922 containerd[1614]: time="2025-09-09T05:37:18.257838364Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:18.264274 containerd[1614]: time="2025-09-09T05:37:18.264182416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:18.269720 containerd[1614]: time="2025-09-09T05:37:18.269496120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.299426319s" Sep 9 05:37:18.269720 containerd[1614]: time="2025-09-09T05:37:18.269588296Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 05:37:18.274055 containerd[1614]: time="2025-09-09T05:37:18.273991380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:37:18.279949 containerd[1614]: time="2025-09-09T05:37:18.277976640Z" level=info msg="CreateContainer within sandbox \"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 05:37:18.297329 containerd[1614]: time="2025-09-09T05:37:18.297268043Z" level=info msg="Container 2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:18.324021 containerd[1614]: time="2025-09-09T05:37:18.323935120Z" level=info msg="CreateContainer within sandbox \"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f\"" Sep 9 05:37:18.328168 containerd[1614]: time="2025-09-09T05:37:18.328097371Z" level=info msg="StartContainer for \"2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f\"" Sep 9 05:37:18.340486 containerd[1614]: time="2025-09-09T05:37:18.340386407Z" level=info msg="connecting to shim 2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f" address="unix:///run/containerd/s/1ff2660e77b11f9a667d587b8cc185ef200e0833080c74f9b5e64ac2da448eb5" protocol=ttrpc version=3 Sep 9 05:37:18.419979 systemd[1]: Started cri-containerd-2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f.scope - libcontainer container 2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f. 
Sep 9 05:37:18.451298 kubelet[2793]: I0909 05:37:18.451205 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-brhf6" podStartSLOduration=40.451179583 podStartE2EDuration="40.451179583s" podCreationTimestamp="2025-09-09 05:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:37:18.449162689 +0000 UTC m=+47.558352404" watchObservedRunningTime="2025-09-09 05:37:18.451179583 +0000 UTC m=+47.560369302" Sep 9 05:37:18.543185 systemd-networkd[1467]: cali5b690d32b03: Link UP Sep 9 05:37:18.545368 systemd-networkd[1467]: cali5b690d32b03: Gained carrier Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.276 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0 calico-apiserver-6b96d9f487- calico-apiserver e5a5b433-2f7b-4464-9d2f-741352c25aec 804 0 2025-09-09 05:36:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b96d9f487 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb calico-apiserver-6b96d9f487-86crn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b690d32b03 [] [] }} ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.277 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.389 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" HandleID="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.390 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" HandleID="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"calico-apiserver-6b96d9f487-86crn", "timestamp":"2025-09-09 05:37:18.389794745 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.390 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.390 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.390 [INFO][4372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.413 [INFO][4372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.434 [INFO][4372] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.455 [INFO][4372] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.470 [INFO][4372] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.480 [INFO][4372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.481 [INFO][4372] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.485 [INFO][4372] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.504 [INFO][4372] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 
handle="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.523 [INFO][4372] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.132/26] block=192.168.15.128/26 handle="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.523 [INFO][4372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.132/26] handle="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.524 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:37:18.584153 containerd[1614]: 2025-09-09 05:37:18.524 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.132/26] IPv6=[] ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" HandleID="k8s-pod-network.0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.536 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0", GenerateName:"calico-apiserver-6b96d9f487-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5a5b433-2f7b-4464-9d2f-741352c25aec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b96d9f487", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"calico-apiserver-6b96d9f487-86crn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b690d32b03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.536 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.132/32] ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.536 [INFO][4359] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b690d32b03 ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.544 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.545 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0", GenerateName:"calico-apiserver-6b96d9f487-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5a5b433-2f7b-4464-9d2f-741352c25aec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b96d9f487", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb", Pod:"calico-apiserver-6b96d9f487-86crn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b690d32b03", MAC:"c6:39:6f:24:cc:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:18.590525 containerd[1614]: 2025-09-09 05:37:18.575 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-86crn" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--86crn-eth0" Sep 9 05:37:18.632290 containerd[1614]: time="2025-09-09T05:37:18.632137696Z" level=info msg="StartContainer for \"2d4340b42ce51028b0fc6a2ada11a2c56dfa1adb260cac77fbe4e513d2b96f9f\" returns successfully" Sep 9 05:37:18.659484 containerd[1614]: time="2025-09-09T05:37:18.659393197Z" level=info msg="connecting to shim 0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb" address="unix:///run/containerd/s/7794da28041e3734f612f17b4765ad8e3b3b8d07197cc59637c4a4134ca7fec0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:18.737682 systemd[1]: Started cri-containerd-0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb.scope - libcontainer container 
0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb. Sep 9 05:37:18.801091 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL Sep 9 05:37:18.932057 containerd[1614]: time="2025-09-09T05:37:18.931166452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-86crn,Uid:e5a5b433-2f7b-4464-9d2f-741352c25aec,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb\"" Sep 9 05:37:19.057582 systemd-networkd[1467]: cali56ff79eef6f: Gained IPv6LL Sep 9 05:37:20.144659 systemd-networkd[1467]: cali5b690d32b03: Gained IPv6LL Sep 9 05:37:20.478307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565939769.mount: Deactivated successfully. Sep 9 05:37:20.505452 containerd[1614]: time="2025-09-09T05:37:20.505312730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:20.506905 containerd[1614]: time="2025-09-09T05:37:20.506844473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 05:37:20.508370 containerd[1614]: time="2025-09-09T05:37:20.508278141Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:20.514119 containerd[1614]: time="2025-09-09T05:37:20.513051199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:20.514119 containerd[1614]: time="2025-09-09T05:37:20.513962537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.239912263s" Sep 9 05:37:20.514119 containerd[1614]: time="2025-09-09T05:37:20.514005198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 05:37:20.516727 containerd[1614]: time="2025-09-09T05:37:20.516689432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 05:37:20.518759 containerd[1614]: time="2025-09-09T05:37:20.518712855Z" level=info msg="CreateContainer within sandbox \"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:37:20.531465 containerd[1614]: time="2025-09-09T05:37:20.528324432Z" level=info msg="Container 3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:20.550602 containerd[1614]: time="2025-09-09T05:37:20.550505967Z" level=info msg="CreateContainer within sandbox \"16617da13b8bd55d2abd6304f0af4e46a4bf8c9c7f66f67942b732f51d300312\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e\"" Sep 9 05:37:20.553202 containerd[1614]: time="2025-09-09T05:37:20.551615758Z" level=info msg="StartContainer for \"3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e\"" Sep 9 05:37:20.554707 containerd[1614]: time="2025-09-09T05:37:20.554666212Z" level=info msg="connecting to shim 3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e" address="unix:///run/containerd/s/9a528f79e7baed14e287b5804d188d3df8942af376b002b3ffb00ba7a1db6f23" protocol=ttrpc version=3 Sep 9 05:37:20.592720 systemd[1]: Started 
cri-containerd-3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e.scope - libcontainer container 3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e. Sep 9 05:37:20.675224 containerd[1614]: time="2025-09-09T05:37:20.675054107Z" level=info msg="StartContainer for \"3052b07d2765d05cd20efbe8921789bb990aeb4bd0479edfe33300e457659d2e\" returns successfully" Sep 9 05:37:21.155699 containerd[1614]: time="2025-09-09T05:37:21.154767826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nhngg,Uid:4d97d611-7546-4d90-b668-89aa3816730e,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:21.155699 containerd[1614]: time="2025-09-09T05:37:21.154823514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d99b4c64-gjhhx,Uid:5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d,Namespace:calico-system,Attempt:0,}" Sep 9 05:37:21.155699 containerd[1614]: time="2025-09-09T05:37:21.154766428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9xc46,Uid:3ec46691-51ce-4118-bbb6-ca3b35a2d45c,Namespace:kube-system,Attempt:0,}" Sep 9 05:37:21.156291 containerd[1614]: time="2025-09-09T05:37:21.156251985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-55zq4,Uid:e9d9ba6a-f077-443e-a1c4-bee160d5b056,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:37:21.613405 kubelet[2793]: I0909 05:37:21.612200 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-654677dfc9-cv72z" podStartSLOduration=2.275635459 podStartE2EDuration="7.61216273s" podCreationTimestamp="2025-09-09 05:37:14 +0000 UTC" firstStartedPulling="2025-09-09 05:37:15.179080145 +0000 UTC m=+44.288269846" lastFinishedPulling="2025-09-09 05:37:20.515607408 +0000 UTC m=+49.624797117" observedRunningTime="2025-09-09 05:37:21.603194519 +0000 UTC m=+50.712384238" watchObservedRunningTime="2025-09-09 05:37:21.61216273 +0000 UTC m=+50.721352445" Sep 9 
05:37:21.845622 systemd-networkd[1467]: califa5d566c430: Link UP Sep 9 05:37:21.850860 systemd-networkd[1467]: califa5d566c430: Gained carrier Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.298 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0 calico-apiserver-6b96d9f487- calico-apiserver e9d9ba6a-f077-443e-a1c4-bee160d5b056 798 0 2025-09-09 05:36:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b96d9f487 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb calico-apiserver-6b96d9f487-55zq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califa5d566c430 [] [] }} ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.299 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.456 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" 
HandleID="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.456 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" HandleID="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032bea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"calico-apiserver-6b96d9f487-55zq4", "timestamp":"2025-09-09 05:37:21.456640713 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.457 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.459 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.459 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.547 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.608 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.680 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.690 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.704 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.704 [INFO][4556] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.726 [INFO][4556] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5 Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.747 [INFO][4556] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 
handle="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.791 [INFO][4556] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.133/26] block=192.168.15.128/26 handle="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.791 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.133/26] handle="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.792 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:37:21.924093 containerd[1614]: 2025-09-09 05:37:21.792 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.133/26] IPv6=[] ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" HandleID="k8s-pod-network.187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.805 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0", GenerateName:"calico-apiserver-6b96d9f487-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d9ba6a-f077-443e-a1c4-bee160d5b056", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b96d9f487", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"calico-apiserver-6b96d9f487-55zq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa5d566c430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.806 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.133/32] ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.807 [INFO][4532] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa5d566c430 ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.848 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.849 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0", GenerateName:"calico-apiserver-6b96d9f487-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9d9ba6a-f077-443e-a1c4-bee160d5b056", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b96d9f487", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5", Pod:"calico-apiserver-6b96d9f487-55zq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa5d566c430", MAC:"b2:41:bf:42:22:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:21.927329 containerd[1614]: 2025-09-09 05:37:21.912 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" Namespace="calico-apiserver" Pod="calico-apiserver-6b96d9f487-55zq4" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--apiserver--6b96d9f487--55zq4-eth0" Sep 9 05:37:22.027111 containerd[1614]: time="2025-09-09T05:37:22.027039211Z" level=info msg="connecting to shim 187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5" address="unix:///run/containerd/s/359492734642f003a6211ea99e3a308ddc47a1861be3b7c4c0d271eff0843c4e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:22.114156 systemd-networkd[1467]: cali9cb81981e6f: Link UP Sep 9 05:37:22.119287 systemd-networkd[1467]: cali9cb81981e6f: Gained carrier Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.370 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0 goldmane-7988f88666- calico-system 4d97d611-7546-4d90-b668-89aa3816730e 806 0 2025-09-09 05:36:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb goldmane-7988f88666-nhngg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9cb81981e6f [] [] }} ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.376 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.610 [INFO][4566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" HandleID="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.611 [INFO][4566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" HandleID="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" 
Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000340c20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"goldmane-7988f88666-nhngg", "timestamp":"2025-09-09 05:37:21.610983406 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.611 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.792 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.793 [INFO][4566] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.842 [INFO][4566] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.868 [INFO][4566] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.926 [INFO][4566] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.937 [INFO][4566] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 
host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.961 [INFO][4566] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.962 [INFO][4566] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.970 [INFO][4566] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:21.991 [INFO][4566] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 handle="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:22.071 [INFO][4566] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.134/26] block=192.168.15.128/26 handle="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:22.072 [INFO][4566] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.134/26] handle="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:22.072 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:37:22.190759 containerd[1614]: 2025-09-09 05:37:22.072 [INFO][4566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.134/26] IPv6=[] ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" HandleID="k8s-pod-network.59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.088 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4d97d611-7546-4d90-b668-89aa3816730e", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"goldmane-7988f88666-nhngg", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb81981e6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.088 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.134/32] ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.088 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cb81981e6f ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.137 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.140 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"4d97d611-7546-4d90-b668-89aa3816730e", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a", Pod:"goldmane-7988f88666-nhngg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb81981e6f", MAC:"6a:b9:24:79:7f:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.194874 containerd[1614]: 2025-09-09 05:37:22.177 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" Namespace="calico-system" Pod="goldmane-7988f88666-nhngg" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-goldmane--7988f88666--nhngg-eth0" Sep 9 05:37:22.220807 systemd[1]: 
Started cri-containerd-187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5.scope - libcontainer container 187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5. Sep 9 05:37:22.335942 systemd-networkd[1467]: caliade9224af8b: Link UP Sep 9 05:37:22.343083 systemd-networkd[1467]: caliade9224af8b: Gained carrier Sep 9 05:37:22.369462 containerd[1614]: time="2025-09-09T05:37:22.369380259Z" level=info msg="connecting to shim 59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a" address="unix:///run/containerd/s/b1a5d73c85db76d48ad1b5faf300a4561492acf68572aafaef01ff63bae7437f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:21.379 [INFO][4516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0 coredns-7c65d6cfc9- kube-system 3ec46691-51ce-4118-bbb6-ca3b35a2d45c 805 0 2025-09-09 05:36:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb coredns-7c65d6cfc9-9xc46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliade9224af8b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:21.381 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" 
WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:21.607 [INFO][4570] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" HandleID="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:21.614 [INFO][4570] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" HandleID="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000355d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"coredns-7c65d6cfc9-9xc46", "timestamp":"2025-09-09 05:37:21.60789202 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:21.616 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.072 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.072 [INFO][4570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.101 [INFO][4570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.135 [INFO][4570] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.166 [INFO][4570] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.179 [INFO][4570] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.195 [INFO][4570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.216 [INFO][4570] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.230 [INFO][4570] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.254 [INFO][4570] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 
handle="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.297 [INFO][4570] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.135/26] block=192.168.15.128/26 handle="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.299 [INFO][4570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.135/26] handle="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.299 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:37:22.390686 containerd[1614]: 2025-09-09 05:37:22.300 [INFO][4570] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.135/26] IPv6=[] ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" HandleID="k8s-pod-network.8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.393119 containerd[1614]: 2025-09-09 05:37:22.315 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0", GenerateName:"coredns-7c65d6cfc9-", 
Namespace:"kube-system", SelfLink:"", UID:"3ec46691-51ce-4118-bbb6-ca3b35a2d45c", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"coredns-7c65d6cfc9-9xc46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliade9224af8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.393119 containerd[1614]: 2025-09-09 05:37:22.318 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.135/32] ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.393119 
containerd[1614]: 2025-09-09 05:37:22.318 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliade9224af8b ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.393119 containerd[1614]: 2025-09-09 05:37:22.349 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.393119 containerd[1614]: 2025-09-09 05:37:22.353 [INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3ec46691-51ce-4118-bbb6-ca3b35a2d45c", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a", Pod:"coredns-7c65d6cfc9-9xc46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliade9224af8b", MAC:"ee:c9:80:63:57:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.393119 containerd[1614]: 2025-09-09 05:37:22.384 [INFO][4516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9xc46" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-coredns--7c65d6cfc9--9xc46-eth0" Sep 9 05:37:22.528723 systemd-networkd[1467]: cali176d275c3a4: Link UP Sep 9 05:37:22.529176 systemd-networkd[1467]: cali176d275c3a4: Gained carrier Sep 9 05:37:22.611110 systemd[1]: Started cri-containerd-59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a.scope - libcontainer container 59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a. 
Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:21.379 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0 calico-kube-controllers-74d99b4c64- calico-system 5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d 796 0 2025-09-09 05:36:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74d99b4c64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb calico-kube-controllers-74d99b4c64-gjhhx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali176d275c3a4 [] [] }} ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:21.380 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:21.623 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" HandleID="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" 
Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:21.629 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" HandleID="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", "pod":"calico-kube-controllers-74d99b4c64-gjhhx", "timestamp":"2025-09-09 05:37:21.623342722 +0000 UTC"}, Hostname:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:21.629 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.300 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.300 [INFO][4569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb' Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.326 [INFO][4569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.346 [INFO][4569] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.373 [INFO][4569] ipam/ipam.go 511: Trying affinity for 192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.379 [INFO][4569] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.398 [INFO][4569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.128/26 host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.398 [INFO][4569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.128/26 handle="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.409 [INFO][4569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319 Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.431 [INFO][4569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.128/26 
handle="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.448 [INFO][4569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.136/26] block=192.168.15.128/26 handle="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.449 [INFO][4569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.136/26] handle="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" host="ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb" Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.449 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:37:22.618113 containerd[1614]: 2025-09-09 05:37:22.449 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.136/26] IPv6=[] ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" HandleID="k8s-pod-network.ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Workload="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.491 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0", GenerateName:"calico-kube-controllers-74d99b4c64-", Namespace:"calico-system", SelfLink:"", UID:"5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d99b4c64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"", Pod:"calico-kube-controllers-74d99b4c64-gjhhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali176d275c3a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.497 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.136/32] ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.497 
[INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali176d275c3a4 ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.535 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.539 [INFO][4509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0", GenerateName:"calico-kube-controllers-74d99b4c64-", Namespace:"calico-system", SelfLink:"", UID:"5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d99b4c64", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-nightly-20250908-2100-345b4714103ff562d0cb", ContainerID:"ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319", Pod:"calico-kube-controllers-74d99b4c64-gjhhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali176d275c3a4", MAC:"4e:36:40:fd:09:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:37:22.620855 containerd[1614]: 2025-09-09 05:37:22.577 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" Namespace="calico-system" Pod="calico-kube-controllers-74d99b4c64-gjhhx" WorkloadEndpoint="ci--4452--0--0--nightly--20250908--2100--345b4714103ff562d0cb-k8s-calico--kube--controllers--74d99b4c64--gjhhx-eth0" Sep 9 05:37:22.647838 containerd[1614]: time="2025-09-09T05:37:22.647272546Z" level=info msg="connecting to shim 8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a" address="unix:///run/containerd/s/969423f94406a0cc36ea472fc32237f91352e56a3746add4f33b50a1e6d37048" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:22.726201 containerd[1614]: time="2025-09-09T05:37:22.725358055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b96d9f487-55zq4,Uid:e9d9ba6a-f077-443e-a1c4-bee160d5b056,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5\"" Sep 9 05:37:22.767978 containerd[1614]: time="2025-09-09T05:37:22.767713897Z" level=info msg="connecting to shim ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319" address="unix:///run/containerd/s/55e31f8f6cb0ccf338872c50a2f73ebb8087df8644598d5bcce9260a491f84af" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:22.802143 systemd[1]: Started cri-containerd-8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a.scope - libcontainer container 8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a. Sep 9 05:37:22.917763 systemd[1]: Started cri-containerd-ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319.scope - libcontainer container ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319. Sep 9 05:37:23.102556 containerd[1614]: time="2025-09-09T05:37:23.101826565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9xc46,Uid:3ec46691-51ce-4118-bbb6-ca3b35a2d45c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a\"" Sep 9 05:37:23.127735 containerd[1614]: time="2025-09-09T05:37:23.127659319Z" level=info msg="CreateContainer within sandbox \"8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:37:23.205867 containerd[1614]: time="2025-09-09T05:37:23.204999042Z" level=info msg="Container 77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:23.235876 containerd[1614]: time="2025-09-09T05:37:23.235811523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nhngg,Uid:4d97d611-7546-4d90-b668-89aa3816730e,Namespace:calico-system,Attempt:0,} returns sandbox id \"59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a\"" Sep 9 05:37:23.240959 containerd[1614]: 
time="2025-09-09T05:37:23.240885415Z" level=info msg="CreateContainer within sandbox \"8f4db4889c9c7db816023106a7eaeef905cfebaae8c32f3467e466b932af940a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5\"" Sep 9 05:37:23.241798 containerd[1614]: time="2025-09-09T05:37:23.241749127Z" level=info msg="StartContainer for \"77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5\"" Sep 9 05:37:23.246810 containerd[1614]: time="2025-09-09T05:37:23.246759692Z" level=info msg="connecting to shim 77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5" address="unix:///run/containerd/s/969423f94406a0cc36ea472fc32237f91352e56a3746add4f33b50a1e6d37048" protocol=ttrpc version=3 Sep 9 05:37:23.298198 systemd[1]: Started cri-containerd-77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5.scope - libcontainer container 77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5. Sep 9 05:37:23.325825 containerd[1614]: time="2025-09-09T05:37:23.325515475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d99b4c64-gjhhx,Uid:5fa1ecdb-01fb-4f51-8fbc-a2dac958e18d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319\"" Sep 9 05:37:23.391492 containerd[1614]: time="2025-09-09T05:37:23.391317897Z" level=info msg="StartContainer for \"77fb8cd5aaa931dafafb7b2259fc2139a9baff780d27358912d50f010ec490a5\" returns successfully" Sep 9 05:37:23.407841 containerd[1614]: time="2025-09-09T05:37:23.407749169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:23.409368 containerd[1614]: time="2025-09-09T05:37:23.409327703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 
05:37:23.411255 containerd[1614]: time="2025-09-09T05:37:23.410939915Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:23.416771 containerd[1614]: time="2025-09-09T05:37:23.416727570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:23.418061 containerd[1614]: time="2025-09-09T05:37:23.418007412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.90113711s" Sep 9 05:37:23.418184 containerd[1614]: time="2025-09-09T05:37:23.418068360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 05:37:23.420767 containerd[1614]: time="2025-09-09T05:37:23.420686961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:37:23.423994 containerd[1614]: time="2025-09-09T05:37:23.423934435Z" level=info msg="CreateContainer within sandbox \"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 05:37:23.441521 containerd[1614]: time="2025-09-09T05:37:23.441406955Z" level=info msg="Container 791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:23.456936 containerd[1614]: 
time="2025-09-09T05:37:23.456833629Z" level=info msg="CreateContainer within sandbox \"124c7a167de2a1c308e9254c58ffbc8d8f96e7082d0e18cf33cd8e35e9c86115\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255\"" Sep 9 05:37:23.460761 containerd[1614]: time="2025-09-09T05:37:23.460707777Z" level=info msg="StartContainer for \"791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255\"" Sep 9 05:37:23.466917 containerd[1614]: time="2025-09-09T05:37:23.466774764Z" level=info msg="connecting to shim 791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255" address="unix:///run/containerd/s/1ff2660e77b11f9a667d587b8cc185ef200e0833080c74f9b5e64ac2da448eb5" protocol=ttrpc version=3 Sep 9 05:37:23.495281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752738768.mount: Deactivated successfully. Sep 9 05:37:23.517723 systemd[1]: Started cri-containerd-791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255.scope - libcontainer container 791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255. 
Sep 9 05:37:23.604230 containerd[1614]: time="2025-09-09T05:37:23.603402063Z" level=info msg="StartContainer for \"791115d233ddd9febee7fd3e21a3e5cb555e741876f5e0eda590a7ec38ebf255\" returns successfully" Sep 9 05:37:23.635111 kubelet[2793]: I0909 05:37:23.634980 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9xc46" podStartSLOduration=45.634951811 podStartE2EDuration="45.634951811s" podCreationTimestamp="2025-09-09 05:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:37:23.633358333 +0000 UTC m=+52.742548054" watchObservedRunningTime="2025-09-09 05:37:23.634951811 +0000 UTC m=+52.744141529" Sep 9 05:37:23.665007 systemd-networkd[1467]: caliade9224af8b: Gained IPv6LL Sep 9 05:37:23.792752 systemd-networkd[1467]: califa5d566c430: Gained IPv6LL Sep 9 05:37:23.985703 systemd-networkd[1467]: cali9cb81981e6f: Gained IPv6LL Sep 9 05:37:24.241125 systemd-networkd[1467]: cali176d275c3a4: Gained IPv6LL Sep 9 05:37:24.298487 kubelet[2793]: I0909 05:37:24.298409 2793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 05:37:24.298487 kubelet[2793]: I0909 05:37:24.298497 2793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 05:37:24.648172 kubelet[2793]: I0909 05:37:24.646994 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sx2vx" podStartSLOduration=24.937630094 podStartE2EDuration="31.646966471s" podCreationTimestamp="2025-09-09 05:36:53 +0000 UTC" firstStartedPulling="2025-09-09 05:37:16.710657443 +0000 UTC m=+45.819847145" lastFinishedPulling="2025-09-09 05:37:23.419993813 +0000 UTC m=+52.529183522" 
observedRunningTime="2025-09-09 05:37:24.644168737 +0000 UTC m=+53.753358453" watchObservedRunningTime="2025-09-09 05:37:24.646966471 +0000 UTC m=+53.756156187" Sep 9 05:37:26.525854 containerd[1614]: time="2025-09-09T05:37:26.525761674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:26.527783 containerd[1614]: time="2025-09-09T05:37:26.527461606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 05:37:26.529416 containerd[1614]: time="2025-09-09T05:37:26.529323092Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:26.532833 containerd[1614]: time="2025-09-09T05:37:26.532792369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:26.533991 containerd[1614]: time="2025-09-09T05:37:26.533947602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.113208883s" Sep 9 05:37:26.534115 containerd[1614]: time="2025-09-09T05:37:26.533998156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:37:26.536416 containerd[1614]: time="2025-09-09T05:37:26.536347649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:37:26.537807 
containerd[1614]: time="2025-09-09T05:37:26.537758241Z" level=info msg="CreateContainer within sandbox \"0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:37:26.551465 containerd[1614]: time="2025-09-09T05:37:26.548769581Z" level=info msg="Container b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:26.565211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851134877.mount: Deactivated successfully. Sep 9 05:37:26.569948 containerd[1614]: time="2025-09-09T05:37:26.569877170Z" level=info msg="CreateContainer within sandbox \"0b3b3272f7ffd79976c6345f6ab0cc71416218f24c22156fab3877a7ab44f4fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48\"" Sep 9 05:37:26.571082 containerd[1614]: time="2025-09-09T05:37:26.571024863Z" level=info msg="StartContainer for \"b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48\"" Sep 9 05:37:26.573661 containerd[1614]: time="2025-09-09T05:37:26.573602426Z" level=info msg="connecting to shim b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48" address="unix:///run/containerd/s/7794da28041e3734f612f17b4765ad8e3b3b8d07197cc59637c4a4134ca7fec0" protocol=ttrpc version=3 Sep 9 05:37:26.610667 systemd[1]: Started cri-containerd-b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48.scope - libcontainer container b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48. 
Sep 9 05:37:26.701032 containerd[1614]: time="2025-09-09T05:37:26.700880354Z" level=info msg="StartContainer for \"b69a7d773ec9a570e61dc5c9b3b81d92aac509cc71b46bab10de39e132b62e48\" returns successfully" Sep 9 05:37:26.748463 containerd[1614]: time="2025-09-09T05:37:26.747971004Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:26.750413 containerd[1614]: time="2025-09-09T05:37:26.750334791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 05:37:26.753342 containerd[1614]: time="2025-09-09T05:37:26.753059007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 216.643847ms" Sep 9 05:37:26.753342 containerd[1614]: time="2025-09-09T05:37:26.753124327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:37:26.755388 containerd[1614]: time="2025-09-09T05:37:26.755322817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 05:37:26.757423 containerd[1614]: time="2025-09-09T05:37:26.757383208Z" level=info msg="CreateContainer within sandbox \"187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:37:26.772929 containerd[1614]: time="2025-09-09T05:37:26.772874987Z" level=info msg="Container ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:26.801026 containerd[1614]: 
time="2025-09-09T05:37:26.800831939Z" level=info msg="CreateContainer within sandbox \"187a23699603c07e1db4dbc3482a244eba0a2c3c87936017fea88879467a59e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee\"" Sep 9 05:37:26.803597 containerd[1614]: time="2025-09-09T05:37:26.803544089Z" level=info msg="StartContainer for \"ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee\"" Sep 9 05:37:26.806291 containerd[1614]: time="2025-09-09T05:37:26.806192428Z" level=info msg="connecting to shim ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee" address="unix:///run/containerd/s/359492734642f003a6211ea99e3a308ddc47a1861be3b7c4c0d271eff0843c4e" protocol=ttrpc version=3 Sep 9 05:37:26.859726 systemd[1]: Started cri-containerd-ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee.scope - libcontainer container ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee. 
Sep 9 05:37:26.961171 containerd[1614]: time="2025-09-09T05:37:26.961109359Z" level=info msg="StartContainer for \"ecb46923132374bff35465157db711fd567fcd9bf49605f84c29c03f65bdf2ee\" returns successfully" Sep 9 05:37:26.992697 ntpd[1528]: Listen normally on 8 vxlan.calico 192.168.15.128:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 8 vxlan.calico 192.168.15.128:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 9 cali5923c394888 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 10 cali70626749932 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 11 vxlan.calico [fe80::6400:d9ff:fe94:fe7d%6]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 12 cali56ff79eef6f [fe80::ecee:eeff:feee:eeee%9]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 13 cali5b690d32b03 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 14 califa5d566c430 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 15 cali9cb81981e6f [fe80::ecee:eeff:feee:eeee%12]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 16 caliade9224af8b [fe80::ecee:eeff:feee:eeee%13]:123 Sep 9 05:37:26.995883 ntpd[1528]: 9 Sep 05:37:26 ntpd[1528]: Listen normally on 17 cali176d275c3a4 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 9 05:37:26.995175 ntpd[1528]: Listen normally on 9 cali5923c394888 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 9 05:37:26.995267 ntpd[1528]: Listen normally on 10 cali70626749932 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 9 05:37:26.995357 ntpd[1528]: Listen normally on 11 vxlan.calico [fe80::6400:d9ff:fe94:fe7d%6]:123 Sep 9 05:37:26.995419 ntpd[1528]: Listen normally on 12 cali56ff79eef6f 
[fe80::ecee:eeff:feee:eeee%9]:123 Sep 9 05:37:26.995503 ntpd[1528]: Listen normally on 13 cali5b690d32b03 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 9 05:37:26.995575 ntpd[1528]: Listen normally on 14 califa5d566c430 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 9 05:37:26.995634 ntpd[1528]: Listen normally on 15 cali9cb81981e6f [fe80::ecee:eeff:feee:eeee%12]:123 Sep 9 05:37:26.995693 ntpd[1528]: Listen normally on 16 caliade9224af8b [fe80::ecee:eeff:feee:eeee%13]:123 Sep 9 05:37:26.995753 ntpd[1528]: Listen normally on 17 cali176d275c3a4 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 9 05:37:27.696601 kubelet[2793]: I0909 05:37:27.695202 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b96d9f487-86crn" podStartSLOduration=33.096339755 podStartE2EDuration="40.695114463s" podCreationTimestamp="2025-09-09 05:36:47 +0000 UTC" firstStartedPulling="2025-09-09 05:37:18.936651282 +0000 UTC m=+48.045840983" lastFinishedPulling="2025-09-09 05:37:26.53542595 +0000 UTC m=+55.644615691" observedRunningTime="2025-09-09 05:37:27.69259292 +0000 UTC m=+56.801782635" watchObservedRunningTime="2025-09-09 05:37:27.695114463 +0000 UTC m=+56.804304180" Sep 9 05:37:27.724119 kubelet[2793]: I0909 05:37:27.724023 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b96d9f487-55zq4" podStartSLOduration=36.702465797 podStartE2EDuration="40.723412802s" podCreationTimestamp="2025-09-09 05:36:47 +0000 UTC" firstStartedPulling="2025-09-09 05:37:22.733320214 +0000 UTC m=+51.842509908" lastFinishedPulling="2025-09-09 05:37:26.754267204 +0000 UTC m=+55.863456913" observedRunningTime="2025-09-09 05:37:27.722701796 +0000 UTC m=+56.831891515" watchObservedRunningTime="2025-09-09 05:37:27.723412802 +0000 UTC m=+56.832602521" Sep 9 05:37:28.695143 kubelet[2793]: I0909 05:37:28.695082 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:37:29.857565 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2855780478.mount: Deactivated successfully. Sep 9 05:37:31.499217 containerd[1614]: time="2025-09-09T05:37:31.499131023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:31.502865 containerd[1614]: time="2025-09-09T05:37:31.502008336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 05:37:31.505175 containerd[1614]: time="2025-09-09T05:37:31.504609800Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:31.510663 containerd[1614]: time="2025-09-09T05:37:31.510611866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:31.514360 containerd[1614]: time="2025-09-09T05:37:31.514305979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.758926977s" Sep 9 05:37:31.514591 containerd[1614]: time="2025-09-09T05:37:31.514362347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 05:37:31.518624 containerd[1614]: time="2025-09-09T05:37:31.518505517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 05:37:31.520855 containerd[1614]: time="2025-09-09T05:37:31.520809884Z" level=info 
msg="CreateContainer within sandbox \"59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 05:37:31.536903 containerd[1614]: time="2025-09-09T05:37:31.535319320Z" level=info msg="Container b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:31.557613 containerd[1614]: time="2025-09-09T05:37:31.557519817Z" level=info msg="CreateContainer within sandbox \"59125d03ac9de81a95fe7cf733650c07080fc0f7d2c2aea1c17f17a187cda09a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\"" Sep 9 05:37:31.559460 containerd[1614]: time="2025-09-09T05:37:31.559235543Z" level=info msg="StartContainer for \"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\"" Sep 9 05:37:31.562301 containerd[1614]: time="2025-09-09T05:37:31.562255686Z" level=info msg="connecting to shim b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8" address="unix:///run/containerd/s/b1a5d73c85db76d48ad1b5faf300a4561492acf68572aafaef01ff63bae7437f" protocol=ttrpc version=3 Sep 9 05:37:31.625128 systemd[1]: Started cri-containerd-b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8.scope - libcontainer container b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8. 
Sep 9 05:37:31.835489 containerd[1614]: time="2025-09-09T05:37:31.834661567Z" level=info msg="StartContainer for \"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" returns successfully" Sep 9 05:37:32.748598 kubelet[2793]: I0909 05:37:32.748233 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nhngg" podStartSLOduration=32.478742776 podStartE2EDuration="40.748060971s" podCreationTimestamp="2025-09-09 05:36:52 +0000 UTC" firstStartedPulling="2025-09-09 05:37:23.247358568 +0000 UTC m=+52.356548271" lastFinishedPulling="2025-09-09 05:37:31.516676753 +0000 UTC m=+60.625866466" observedRunningTime="2025-09-09 05:37:32.747132092 +0000 UTC m=+61.856321810" watchObservedRunningTime="2025-09-09 05:37:32.748060971 +0000 UTC m=+61.857250684" Sep 9 05:37:33.918449 containerd[1614]: time="2025-09-09T05:37:33.918084514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"0f43d9bb940025a52f7a523e2d709629bbcf6875b6e7bc215eb2aeac41f3f137\" pid:5071 exit_status:1 exited_at:{seconds:1757396253 nanos:916865898}" Sep 9 05:37:34.607967 containerd[1614]: time="2025-09-09T05:37:34.607890704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:34.609487 containerd[1614]: time="2025-09-09T05:37:34.609369012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 05:37:34.611012 containerd[1614]: time="2025-09-09T05:37:34.610933140Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:34.614352 containerd[1614]: time="2025-09-09T05:37:34.614268115Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:34.615721 containerd[1614]: time="2025-09-09T05:37:34.615296732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.096746968s" Sep 9 05:37:34.615721 containerd[1614]: time="2025-09-09T05:37:34.615347182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 05:37:34.648383 containerd[1614]: time="2025-09-09T05:37:34.648327786Z" level=info msg="CreateContainer within sandbox \"ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:37:34.665535 containerd[1614]: time="2025-09-09T05:37:34.663366781Z" level=info msg="Container 30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:34.680326 containerd[1614]: time="2025-09-09T05:37:34.680226432Z" level=info msg="CreateContainer within sandbox \"ac7ef7f2b4b72837c5f09a91d93cf5279bedbcae8b6d3ca346c5255a9b3f5319\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\"" Sep 9 05:37:34.682208 containerd[1614]: time="2025-09-09T05:37:34.682136329Z" level=info msg="StartContainer for \"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\"" Sep 9 05:37:34.685705 containerd[1614]: 
time="2025-09-09T05:37:34.685654090Z" level=info msg="connecting to shim 30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4" address="unix:///run/containerd/s/55e31f8f6cb0ccf338872c50a2f73ebb8087df8644598d5bcce9260a491f84af" protocol=ttrpc version=3 Sep 9 05:37:34.726803 systemd[1]: Started cri-containerd-30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4.scope - libcontainer container 30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4. Sep 9 05:37:34.846588 containerd[1614]: time="2025-09-09T05:37:34.846531407Z" level=info msg="StartContainer for \"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" returns successfully" Sep 9 05:37:34.904203 containerd[1614]: time="2025-09-09T05:37:34.902678279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"4fa7bd4c1d69e00c40b1c2107c0804491475f625d8f668e8622326b4df3eb9f1\" pid:5115 exit_status:1 exited_at:{seconds:1757396254 nanos:901909107}" Sep 9 05:37:35.668790 containerd[1614]: time="2025-09-09T05:37:35.668719863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"59ea0e07b49a9fbe8a5173ca4322d307ca50782e37fb3eb777b2b55e515bcf7b\" pid:5158 exit_status:1 exited_at:{seconds:1757396255 nanos:668288859}" Sep 9 05:37:35.819143 containerd[1614]: time="2025-09-09T05:37:35.819061318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" id:\"7b990456cff161bc41ac235858f9dc4de96c7bcf6204a9e9bcbd63aa122907ed\" pid:5181 exited_at:{seconds:1757396255 nanos:818585271}" Sep 9 05:37:35.839560 kubelet[2793]: I0909 05:37:35.839461 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74d99b4c64-gjhhx" podStartSLOduration=31.551874587 
podStartE2EDuration="42.83940326s" podCreationTimestamp="2025-09-09 05:36:53 +0000 UTC" firstStartedPulling="2025-09-09 05:37:23.329466662 +0000 UTC m=+52.438656359" lastFinishedPulling="2025-09-09 05:37:34.616995326 +0000 UTC m=+63.726185032" observedRunningTime="2025-09-09 05:37:35.769134962 +0000 UTC m=+64.878324677" watchObservedRunningTime="2025-09-09 05:37:35.83940326 +0000 UTC m=+64.948592979" Sep 9 05:37:39.076047 containerd[1614]: time="2025-09-09T05:37:39.075969384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\" id:\"5c45859db765a734bd66d89d27b5345c6a929055c48b4148f6c734b172f157d0\" pid:5212 exited_at:{seconds:1757396259 nanos:75593404}" Sep 9 05:37:39.219401 containerd[1614]: time="2025-09-09T05:37:39.219331953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\" id:\"71392b524191c7922997f8bf1be0339f57e4680341c8f93176e399fc361d9f80\" pid:5235 exited_at:{seconds:1757396259 nanos:218854806}" Sep 9 05:37:46.681029 containerd[1614]: time="2025-09-09T05:37:46.680968360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" id:\"7f3749efbe5f43d00f8bdf9713f03936e00d7c5937106cf9e0ad3ae5cf206431\" pid:5263 exited_at:{seconds:1757396266 nanos:679900651}" Sep 9 05:37:50.285452 kubelet[2793]: I0909 05:37:50.284855 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:38:00.025287 systemd[1]: Started sshd@7-10.128.0.4:22-139.178.89.65:56090.service - OpenSSH per-connection server daemon (139.178.89.65:56090). 
Sep 9 05:38:00.210336 containerd[1614]: time="2025-09-09T05:38:00.210175142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"f3ec98b0d1428cf2813b637e34dfc9f40937514a72f0c95b46697a32763b86a7\" pid:5301 exited_at:{seconds:1757396280 nanos:208893002}" Sep 9 05:38:00.392748 sshd[5297]: Accepted publickey for core from 139.178.89.65 port 56090 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:00.396689 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:00.410631 systemd-logind[1542]: New session 8 of user core. Sep 9 05:38:00.416718 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:38:00.758632 sshd[5315]: Connection closed by 139.178.89.65 port 56090 Sep 9 05:38:00.759758 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Sep 9 05:38:00.772038 systemd[1]: sshd@7-10.128.0.4:22-139.178.89.65:56090.service: Deactivated successfully. Sep 9 05:38:00.779120 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:38:00.781913 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:38:00.785286 systemd-logind[1542]: Removed session 8. Sep 9 05:38:05.707410 containerd[1614]: time="2025-09-09T05:38:05.707342648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" id:\"485313ba8c6518956709753d1e59e610e41ddfe6092d34f5ce89354c31c01782\" pid:5343 exited_at:{seconds:1757396285 nanos:706943748}" Sep 9 05:38:05.826956 systemd[1]: Started sshd@8-10.128.0.4:22-139.178.89.65:56092.service - OpenSSH per-connection server daemon (139.178.89.65:56092). 
Sep 9 05:38:05.849062 containerd[1614]: time="2025-09-09T05:38:05.848948925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"42081d25ffc7eae5dd0cd750037685cf41a2ab19bb8d06a9725c009ea70fd0f1\" pid:5361 exited_at:{seconds:1757396285 nanos:844307204}" Sep 9 05:38:06.185199 sshd[5375]: Accepted publickey for core from 139.178.89.65 port 56092 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:06.188097 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:06.198719 systemd-logind[1542]: New session 9 of user core. Sep 9 05:38:06.205740 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:38:06.629045 sshd[5380]: Connection closed by 139.178.89.65 port 56092 Sep 9 05:38:06.630147 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Sep 9 05:38:06.639143 systemd[1]: sshd@8-10.128.0.4:22-139.178.89.65:56092.service: Deactivated successfully. Sep 9 05:38:06.646811 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:38:06.649671 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:38:06.653313 systemd-logind[1542]: Removed session 9. Sep 9 05:38:09.116617 containerd[1614]: time="2025-09-09T05:38:09.116561145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\" id:\"82b93120614babb7f9b6ba6e1704c4625489b49603de217baa8e95438a51656c\" pid:5405 exited_at:{seconds:1757396289 nanos:115388564}" Sep 9 05:38:11.690805 systemd[1]: Started sshd@9-10.128.0.4:22-139.178.89.65:50014.service - OpenSSH per-connection server daemon (139.178.89.65:50014). 
Sep 9 05:38:12.014795 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 50014 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:12.018093 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:12.028837 systemd-logind[1542]: New session 10 of user core. Sep 9 05:38:12.034725 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:38:12.398323 sshd[5420]: Connection closed by 139.178.89.65 port 50014 Sep 9 05:38:12.399371 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Sep 9 05:38:12.412776 systemd[1]: sshd@9-10.128.0.4:22-139.178.89.65:50014.service: Deactivated successfully. Sep 9 05:38:12.417857 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:38:12.419767 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:38:12.423862 systemd-logind[1542]: Removed session 10. Sep 9 05:38:12.458982 systemd[1]: Started sshd@10-10.128.0.4:22-139.178.89.65:50028.service - OpenSSH per-connection server daemon (139.178.89.65:50028). Sep 9 05:38:12.793551 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 50028 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:12.797517 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:12.814251 systemd-logind[1542]: New session 11 of user core. Sep 9 05:38:12.817643 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:38:13.225684 sshd[5436]: Connection closed by 139.178.89.65 port 50028 Sep 9 05:38:13.228795 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Sep 9 05:38:13.242130 systemd[1]: sshd@10-10.128.0.4:22-139.178.89.65:50028.service: Deactivated successfully. Sep 9 05:38:13.250892 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:38:13.255069 systemd-logind[1542]: Session 11 logged out. 
Waiting for processes to exit. Sep 9 05:38:13.261688 systemd-logind[1542]: Removed session 11. Sep 9 05:38:13.288635 systemd[1]: Started sshd@11-10.128.0.4:22-139.178.89.65:50036.service - OpenSSH per-connection server daemon (139.178.89.65:50036). Sep 9 05:38:13.619693 sshd[5446]: Accepted publickey for core from 139.178.89.65 port 50036 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:13.624957 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:13.636778 systemd-logind[1542]: New session 12 of user core. Sep 9 05:38:13.643664 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 05:38:13.994612 sshd[5449]: Connection closed by 139.178.89.65 port 50036 Sep 9 05:38:13.997972 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Sep 9 05:38:14.009334 systemd[1]: sshd@11-10.128.0.4:22-139.178.89.65:50036.service: Deactivated successfully. Sep 9 05:38:14.015086 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:38:14.018517 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:38:14.021724 systemd-logind[1542]: Removed session 12. Sep 9 05:38:19.062857 systemd[1]: Started sshd@12-10.128.0.4:22-139.178.89.65:50042.service - OpenSSH per-connection server daemon (139.178.89.65:50042). Sep 9 05:38:19.415280 sshd[5468]: Accepted publickey for core from 139.178.89.65 port 50042 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:38:19.419246 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:38:19.432213 systemd-logind[1542]: New session 13 of user core. Sep 9 05:38:19.441869 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 9 05:38:19.778839 sshd[5471]: Connection closed by 139.178.89.65 port 50042
Sep 9 05:38:19.779778 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:19.792845 systemd[1]: sshd@12-10.128.0.4:22-139.178.89.65:50042.service: Deactivated successfully.
Sep 9 05:38:19.799151 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:38:19.801194 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:38:19.805278 systemd-logind[1542]: Removed session 13.
Sep 9 05:38:24.843848 systemd[1]: Started sshd@13-10.128.0.4:22-139.178.89.65:48294.service - OpenSSH per-connection server daemon (139.178.89.65:48294).
Sep 9 05:38:25.188773 sshd[5483]: Accepted publickey for core from 139.178.89.65 port 48294 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:25.191642 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:25.203022 systemd-logind[1542]: New session 14 of user core.
Sep 9 05:38:25.212799 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:38:25.548702 sshd[5486]: Connection closed by 139.178.89.65 port 48294
Sep 9 05:38:25.550785 sshd-session[5483]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:25.561861 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:38:25.565266 systemd[1]: sshd@13-10.128.0.4:22-139.178.89.65:48294.service: Deactivated successfully.
Sep 9 05:38:25.573029 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:38:25.579659 systemd-logind[1542]: Removed session 14.
Sep 9 05:38:30.607415 systemd[1]: Started sshd@14-10.128.0.4:22-139.178.89.65:57792.service - OpenSSH per-connection server daemon (139.178.89.65:57792).
Sep 9 05:38:30.942738 sshd[5498]: Accepted publickey for core from 139.178.89.65 port 57792 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:30.946420 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:30.957505 systemd-logind[1542]: New session 15 of user core.
Sep 9 05:38:30.963922 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:38:31.332999 sshd[5501]: Connection closed by 139.178.89.65 port 57792
Sep 9 05:38:31.333788 sshd-session[5498]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:31.344614 systemd[1]: sshd@14-10.128.0.4:22-139.178.89.65:57792.service: Deactivated successfully.
Sep 9 05:38:31.350907 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:38:31.356329 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:38:31.362061 systemd-logind[1542]: Removed session 15.
Sep 9 05:38:35.517799 containerd[1614]: time="2025-09-09T05:38:35.517725866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" id:\"df800caec6ec23e88896509e976e61e65977aecf32b86e5f0862d1d69282209f\" pid:5526 exited_at:{seconds:1757396315 nanos:517349407}"
Sep 9 05:38:35.811420 containerd[1614]: time="2025-09-09T05:38:35.811240901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"25fde48d8cf9171dcd0b5b94f81042ef361aad385018fe270b2a1268d9a4cbd6\" pid:5546 exited_at:{seconds:1757396315 nanos:809863661}"
Sep 9 05:38:36.397049 systemd[1]: Started sshd@15-10.128.0.4:22-139.178.89.65:57800.service - OpenSSH per-connection server daemon (139.178.89.65:57800).
Sep 9 05:38:36.725917 sshd[5558]: Accepted publickey for core from 139.178.89.65 port 57800 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:36.729518 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:36.740416 systemd-logind[1542]: New session 16 of user core.
Sep 9 05:38:36.751948 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:38:37.047872 sshd[5561]: Connection closed by 139.178.89.65 port 57800
Sep 9 05:38:37.049265 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:37.057046 systemd[1]: sshd@15-10.128.0.4:22-139.178.89.65:57800.service: Deactivated successfully.
Sep 9 05:38:37.061948 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:38:37.064218 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:38:37.067180 systemd-logind[1542]: Removed session 16.
Sep 9 05:38:37.105029 systemd[1]: Started sshd@16-10.128.0.4:22-139.178.89.65:57814.service - OpenSSH per-connection server daemon (139.178.89.65:57814).
Sep 9 05:38:37.436995 sshd[5573]: Accepted publickey for core from 139.178.89.65 port 57814 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:37.440736 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:37.455224 systemd-logind[1542]: New session 17 of user core.
Sep 9 05:38:37.460646 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:38:37.886573 sshd[5576]: Connection closed by 139.178.89.65 port 57814
Sep 9 05:38:37.888741 sshd-session[5573]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:37.901528 systemd[1]: sshd@16-10.128.0.4:22-139.178.89.65:57814.service: Deactivated successfully.
Sep 9 05:38:37.907963 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:38:37.912812 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:38:37.915902 systemd-logind[1542]: Removed session 17.
Sep 9 05:38:37.951948 systemd[1]: Started sshd@17-10.128.0.4:22-139.178.89.65:57830.service - OpenSSH per-connection server daemon (139.178.89.65:57830).
Sep 9 05:38:38.284820 sshd[5593]: Accepted publickey for core from 139.178.89.65 port 57830 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:38.289239 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:38.307833 systemd-logind[1542]: New session 18 of user core.
Sep 9 05:38:38.313144 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:38:39.583091 containerd[1614]: time="2025-09-09T05:38:39.582765900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae8046e95c02d0daedc19523913d024bcaadfb538ad2842993497ea69baab1b\" id:\"060112b12c908ae26781085373393cf89c1b11cd6c9e76d5de658bdbbde0ee2e\" pid:5620 exited_at:{seconds:1757396319 nanos:581798572}"
Sep 9 05:38:41.899093 sshd[5596]: Connection closed by 139.178.89.65 port 57830
Sep 9 05:38:41.900118 sshd-session[5593]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:41.909282 systemd[1]: sshd@17-10.128.0.4:22-139.178.89.65:57830.service: Deactivated successfully.
Sep 9 05:38:41.916707 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:38:41.917629 systemd[1]: session-18.scope: Consumed 949ms CPU time, 82.8M memory peak.
Sep 9 05:38:41.924033 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:38:41.927967 systemd-logind[1542]: Removed session 18.
Sep 9 05:38:41.962355 systemd[1]: Started sshd@18-10.128.0.4:22-139.178.89.65:46364.service - OpenSSH per-connection server daemon (139.178.89.65:46364).
Sep 9 05:38:42.291106 sshd[5638]: Accepted publickey for core from 139.178.89.65 port 46364 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:42.294921 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:42.310655 systemd-logind[1542]: New session 19 of user core.
Sep 9 05:38:42.313900 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:38:42.948491 sshd[5643]: Connection closed by 139.178.89.65 port 46364
Sep 9 05:38:42.949452 sshd-session[5638]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:42.961572 systemd[1]: sshd@18-10.128.0.4:22-139.178.89.65:46364.service: Deactivated successfully.
Sep 9 05:38:42.969171 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:38:42.976146 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:38:42.978739 systemd-logind[1542]: Removed session 19.
Sep 9 05:38:43.010071 systemd[1]: Started sshd@19-10.128.0.4:22-139.178.89.65:46370.service - OpenSSH per-connection server daemon (139.178.89.65:46370).
Sep 9 05:38:43.339956 sshd[5653]: Accepted publickey for core from 139.178.89.65 port 46370 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:43.342562 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:43.351821 systemd-logind[1542]: New session 20 of user core.
Sep 9 05:38:43.359713 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:38:43.694470 sshd[5658]: Connection closed by 139.178.89.65 port 46370
Sep 9 05:38:43.692883 sshd-session[5653]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:43.702585 systemd[1]: sshd@19-10.128.0.4:22-139.178.89.65:46370.service: Deactivated successfully.
Sep 9 05:38:43.707708 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:38:43.713176 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:38:43.715714 systemd-logind[1542]: Removed session 20.
Sep 9 05:38:46.503637 containerd[1614]: time="2025-09-09T05:38:46.502986394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30771653b07f2276d6b5f9402fa1f1d27de7651a2a91ea8857f89c6ee12721d4\" id:\"35171ffb83b099df44173eeee68656c79cf1fbb6a6b3d1aedb6195519c9b5f0b\" pid:5682 exited_at:{seconds:1757396326 nanos:501869719}"
Sep 9 05:38:48.752239 systemd[1]: Started sshd@20-10.128.0.4:22-139.178.89.65:46378.service - OpenSSH per-connection server daemon (139.178.89.65:46378).
Sep 9 05:38:49.086422 sshd[5702]: Accepted publickey for core from 139.178.89.65 port 46378 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:49.090733 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:49.103578 systemd-logind[1542]: New session 21 of user core.
Sep 9 05:38:49.110790 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:38:49.442902 sshd[5705]: Connection closed by 139.178.89.65 port 46378
Sep 9 05:38:49.443913 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:49.453735 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:38:49.454991 systemd[1]: sshd@20-10.128.0.4:22-139.178.89.65:46378.service: Deactivated successfully.
Sep 9 05:38:49.460910 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:38:49.468025 systemd-logind[1542]: Removed session 21.
Sep 9 05:38:54.507625 systemd[1]: Started sshd@21-10.128.0.4:22-139.178.89.65:50512.service - OpenSSH per-connection server daemon (139.178.89.65:50512).
Sep 9 05:38:54.839330 sshd[5733]: Accepted publickey for core from 139.178.89.65 port 50512 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:38:54.843497 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:38:54.858970 systemd-logind[1542]: New session 22 of user core.
Sep 9 05:38:54.862656 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:38:55.230889 sshd[5736]: Connection closed by 139.178.89.65 port 50512
Sep 9 05:38:55.231852 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Sep 9 05:38:55.242082 systemd[1]: sshd@21-10.128.0.4:22-139.178.89.65:50512.service: Deactivated successfully.
Sep 9 05:38:55.242974 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:38:55.249024 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:38:55.254913 systemd-logind[1542]: Removed session 22.
Sep 9 05:39:00.215654 containerd[1614]: time="2025-09-09T05:39:00.215553628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b406732634495a08369012d0e31f98068cc9335547cabb1a5b6ed46b5439c8\" id:\"43d169e6d893f5f40910c2f36616eb766fd8345fc4c1fb72f8569281d8760343\" pid:5761 exited_at:{seconds:1757396340 nanos:213540353}"
Sep 9 05:39:00.293563 systemd[1]: Started sshd@22-10.128.0.4:22-139.178.89.65:54238.service - OpenSSH per-connection server daemon (139.178.89.65:54238).
Sep 9 05:39:00.626564 sshd[5771]: Accepted publickey for core from 139.178.89.65 port 54238 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:39:00.628976 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:39:00.640124 systemd-logind[1542]: New session 23 of user core.
Sep 9 05:39:00.645714 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:39:00.994919 sshd[5774]: Connection closed by 139.178.89.65 port 54238
Sep 9 05:39:00.995904 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
Sep 9 05:39:01.004684 systemd[1]: sshd@22-10.128.0.4:22-139.178.89.65:54238.service: Deactivated successfully.
Sep 9 05:39:01.011933 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:39:01.017123 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:39:01.020065 systemd-logind[1542]: Removed session 23.