Sep 9 04:53:10.797136 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 04:53:10.797160 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 04:53:10.797170 kernel: KASLR enabled
Sep 9 04:53:10.797176 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 9 04:53:10.797181 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Sep 9 04:53:10.797187 kernel: random: crng init done
Sep 9 04:53:10.797194 kernel: secureboot: Secure boot disabled
Sep 9 04:53:10.797200 kernel: ACPI: Early table checksum verification disabled
Sep 9 04:53:10.797219 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 9 04:53:10.797226 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 9 04:53:10.797234 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797240 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797246 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797252 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797259 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797266 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797272 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797278 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797284 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:53:10.797290 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 04:53:10.797296 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 9 04:53:10.797302 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 04:53:10.797308 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 9 04:53:10.797314 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Sep 9 04:53:10.797320 kernel: Zone ranges:
Sep 9 04:53:10.797326 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 9 04:53:10.797334 kernel: DMA32 empty
Sep 9 04:53:10.797340 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 9 04:53:10.797346 kernel: Device empty
Sep 9 04:53:10.797352 kernel: Movable zone start for each node
Sep 9 04:53:10.797358 kernel: Early memory node ranges
Sep 9 04:53:10.797364 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Sep 9 04:53:10.797370 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Sep 9 04:53:10.797376 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Sep 9 04:53:10.797382 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 9 04:53:10.797388 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 9 04:53:10.797393 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 9 04:53:10.797399 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 9 04:53:10.797407 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 9 04:53:10.797413 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 9 04:53:10.797422 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 9 04:53:10.797428 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 9 04:53:10.797435 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Sep 9 04:53:10.797442 kernel: psci: probing for conduit method from ACPI.
Sep 9 04:53:10.797449 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 04:53:10.797455 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 04:53:10.797462 kernel: psci: Trusted OS migration not required
Sep 9 04:53:10.797468 kernel: psci: SMC Calling Convention v1.1
Sep 9 04:53:10.797475 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 04:53:10.797482 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 04:53:10.797488 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 04:53:10.797495 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 04:53:10.797501 kernel: Detected PIPT I-cache on CPU0
Sep 9 04:53:10.797508 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 04:53:10.797515 kernel: CPU features: detected: Spectre-v4
Sep 9 04:53:10.797522 kernel: CPU features: detected: Spectre-BHB
Sep 9 04:53:10.797528 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 04:53:10.797535 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 04:53:10.797541 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 04:53:10.797547 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 04:53:10.797554 kernel: alternatives: applying boot alternatives
Sep 9 04:53:10.797562 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 04:53:10.797569 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 04:53:10.797575 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 04:53:10.797583 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 04:53:10.797589 kernel: Fallback order for Node 0: 0
Sep 9 04:53:10.797596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Sep 9 04:53:10.797602 kernel: Policy zone: Normal
Sep 9 04:53:10.797609 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 04:53:10.797615 kernel: software IO TLB: area num 2.
Sep 9 04:53:10.797621 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Sep 9 04:53:10.797628 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 04:53:10.797634 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 04:53:10.797641 kernel: rcu: RCU event tracing is enabled.
Sep 9 04:53:10.797648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 04:53:10.797654 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 04:53:10.797662 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 04:53:10.797669 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 04:53:10.797675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 04:53:10.797681 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 04:53:10.797688 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 04:53:10.797695 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 04:53:10.797701 kernel: GICv3: 256 SPIs implemented
Sep 9 04:53:10.797707 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 04:53:10.797714 kernel: Root IRQ handler: gic_handle_irq
Sep 9 04:53:10.797720 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 04:53:10.797727 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 04:53:10.797733 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 04:53:10.797755 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 04:53:10.797762 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 04:53:10.797769 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Sep 9 04:53:10.797775 kernel: GICv3: using LPI property table @0x0000000100120000
Sep 9 04:53:10.797781 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Sep 9 04:53:10.797805 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 04:53:10.797812 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:53:10.797819 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 04:53:10.797826 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 04:53:10.797832 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 04:53:10.797839 kernel: Console: colour dummy device 80x25
Sep 9 04:53:10.797848 kernel: ACPI: Core revision 20240827
Sep 9 04:53:10.797855 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 04:53:10.797862 kernel: pid_max: default: 32768 minimum: 301
Sep 9 04:53:10.797868 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 04:53:10.797875 kernel: landlock: Up and running.
Sep 9 04:53:10.797881 kernel: SELinux: Initializing.
Sep 9 04:53:10.797888 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:53:10.797894 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:53:10.797901 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 04:53:10.797909 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 04:53:10.797916 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 04:53:10.797923 kernel: Remapping and enabling EFI services.
Sep 9 04:53:10.797929 kernel: smp: Bringing up secondary CPUs ...
Sep 9 04:53:10.797936 kernel: Detected PIPT I-cache on CPU1
Sep 9 04:53:10.797943 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 04:53:10.797950 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Sep 9 04:53:10.797956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:53:10.797963 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 04:53:10.797971 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 04:53:10.797982 kernel: SMP: Total of 2 processors activated.
Sep 9 04:53:10.797989 kernel: CPU: All CPU(s) started at EL1
Sep 9 04:53:10.797997 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 04:53:10.798004 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 04:53:10.798011 kernel: CPU features: detected: Common not Private translations
Sep 9 04:53:10.798018 kernel: CPU features: detected: CRC32 instructions
Sep 9 04:53:10.798025 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 04:53:10.798034 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 04:53:10.798041 kernel: CPU features: detected: LSE atomic instructions
Sep 9 04:53:10.798048 kernel: CPU features: detected: Privileged Access Never
Sep 9 04:53:10.798055 kernel: CPU features: detected: RAS Extension Support
Sep 9 04:53:10.798062 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 04:53:10.798069 kernel: alternatives: applying system-wide alternatives
Sep 9 04:53:10.798076 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Sep 9 04:53:10.798084 kernel: Memory: 3859556K/4096000K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 214964K reserved, 16384K cma-reserved)
Sep 9 04:53:10.798091 kernel: devtmpfs: initialized
Sep 9 04:53:10.798100 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 04:53:10.798107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 04:53:10.798114 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 04:53:10.798120 kernel: 0 pages in range for non-PLT usage
Sep 9 04:53:10.798127 kernel: 508560 pages in range for PLT usage
Sep 9 04:53:10.798134 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 04:53:10.798141 kernel: SMBIOS 3.0.0 present.
Sep 9 04:53:10.798148 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 9 04:53:10.798155 kernel: DMI: Memory slots populated: 1/1
Sep 9 04:53:10.798163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 04:53:10.798170 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 04:53:10.798177 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 04:53:10.798184 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 04:53:10.798191 kernel: audit: initializing netlink subsys (disabled)
Sep 9 04:53:10.798198 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 9 04:53:10.798215 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 04:53:10.798224 kernel: cpuidle: using governor menu
Sep 9 04:53:10.798231 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 04:53:10.798240 kernel: ASID allocator initialised with 32768 entries
Sep 9 04:53:10.798247 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 04:53:10.798255 kernel: Serial: AMBA PL011 UART driver
Sep 9 04:53:10.798262 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 04:53:10.798269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 04:53:10.798276 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 04:53:10.798283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 04:53:10.798290 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 04:53:10.798297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 04:53:10.798305 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 04:53:10.798312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 04:53:10.798319 kernel: ACPI: Added _OSI(Module Device)
Sep 9 04:53:10.798326 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 04:53:10.798333 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 04:53:10.798340 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 04:53:10.798347 kernel: ACPI: Interpreter enabled
Sep 9 04:53:10.798354 kernel: ACPI: Using GIC for interrupt routing
Sep 9 04:53:10.798360 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 04:53:10.798369 kernel: ACPI: CPU0 has been hot-added
Sep 9 04:53:10.798376 kernel: ACPI: CPU1 has been hot-added
Sep 9 04:53:10.798383 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 04:53:10.798390 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 04:53:10.798397 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 04:53:10.798544 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 04:53:10.798607 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 04:53:10.798664 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 04:53:10.798722 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 04:53:10.801950 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 04:53:10.801984 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 04:53:10.801992 kernel: PCI host bridge to bus 0000:00
Sep 9 04:53:10.802082 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 04:53:10.802137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 04:53:10.802191 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 04:53:10.802330 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 04:53:10.802417 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 04:53:10.802486 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Sep 9 04:53:10.802547 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Sep 9 04:53:10.802606 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 9 04:53:10.802675 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.802737 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Sep 9 04:53:10.802881 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 9 04:53:10.802967 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Sep 9 04:53:10.803058 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Sep 9 04:53:10.803136 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.803230 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Sep 9 04:53:10.803304 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 9 04:53:10.803388 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Sep 9 04:53:10.803476 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.803537 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Sep 9 04:53:10.803645 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 9 04:53:10.803708 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Sep 9 04:53:10.805893 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Sep 9 04:53:10.805991 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.806058 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Sep 9 04:53:10.806117 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 9 04:53:10.806176 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Sep 9 04:53:10.806290 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Sep 9 04:53:10.806365 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.806426 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Sep 9 04:53:10.806484 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 9 04:53:10.806546 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 9 04:53:10.806604 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Sep 9 04:53:10.806670 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.806728 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Sep 9 04:53:10.806831 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 9 04:53:10.806891 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Sep 9 04:53:10.806948 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Sep 9 04:53:10.807017 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.807075 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Sep 9 04:53:10.807132 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 9 04:53:10.807224 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Sep 9 04:53:10.807290 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Sep 9 04:53:10.807365 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.807426 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Sep 9 04:53:10.807487 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 9 04:53:10.807544 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Sep 9 04:53:10.807610 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 9 04:53:10.807668 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Sep 9 04:53:10.807726 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 9 04:53:10.809872 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Sep 9 04:53:10.809971 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Sep 9 04:53:10.810034 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Sep 9 04:53:10.810107 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 9 04:53:10.810168 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Sep 9 04:53:10.810249 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 04:53:10.810313 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Sep 9 04:53:10.810382 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 9 04:53:10.810446 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Sep 9 04:53:10.810514 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Sep 9 04:53:10.810574 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Sep 9 04:53:10.810634 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 9 04:53:10.810728 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Sep 9 04:53:10.810829 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 9 04:53:10.810909 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 9 04:53:10.810972 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 9 04:53:10.811043 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Sep 9 04:53:10.811104 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Sep 9 04:53:10.811176 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 9 04:53:10.811297 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 9 04:53:10.811379 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Sep 9 04:53:10.811447 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 9 04:53:10.811505 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Sep 9 04:53:10.811566 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 9 04:53:10.811627 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 9 04:53:10.811684 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 9 04:53:10.815030 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 9 04:53:10.815149 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 9 04:53:10.815251 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 9 04:53:10.815319 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 9 04:53:10.815380 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 9 04:53:10.815439 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 9 04:53:10.815503 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 9 04:53:10.815564 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 9 04:53:10.815625 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 9 04:53:10.815688 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 9 04:53:10.815768 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 9 04:53:10.815830 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 9 04:53:10.815891 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 9 04:53:10.815969 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 9 04:53:10.816030 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 9 04:53:10.816098 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 9 04:53:10.816156 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 9 04:53:10.816227 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 9 04:53:10.816296 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 9 04:53:10.816354 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 9 04:53:10.816412 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 9 04:53:10.816474 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 9 04:53:10.816536 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 9 04:53:10.816593 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 9 04:53:10.816656 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Sep 9 04:53:10.817836 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Sep 9 04:53:10.817961 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Sep 9 04:53:10.818023 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Sep 9 04:53:10.818084 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Sep 9 04:53:10.818160 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Sep 9 04:53:10.818254 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Sep 9 04:53:10.818327 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Sep 9 04:53:10.818398 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Sep 9 04:53:10.818470 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Sep 9 04:53:10.818546 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Sep 9 04:53:10.818609 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Sep 9 04:53:10.818671 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Sep 9 04:53:10.818785 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Sep 9 04:53:10.818852 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Sep 9 04:53:10.818924 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Sep 9 04:53:10.818992 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Sep 9 04:53:10.819050 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Sep 9 04:53:10.819114 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Sep 9 04:53:10.819178 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Sep 9 04:53:10.819259 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Sep 9 04:53:10.819340 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Sep 9 04:53:10.819402 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Sep 9 04:53:10.819461 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Sep 9 04:53:10.819519 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Sep 9 04:53:10.819579 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Sep 9 04:53:10.819637 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Sep 9 04:53:10.819784 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Sep 9 04:53:10.819860 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Sep 9 04:53:10.819918 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Sep 9 04:53:10.820023 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Sep 9 04:53:10.820087 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Sep 9 04:53:10.820145 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Sep 9 04:53:10.820285 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Sep 9 04:53:10.820351 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Sep 9 04:53:10.820429 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Sep 9 04:53:10.820507 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Sep 9 04:53:10.820568 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Sep 9 04:53:10.820631 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Sep 9 04:53:10.820710 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Sep 9 04:53:10.823284 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 04:53:10.823382 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Sep 9 04:53:10.823444 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 9 04:53:10.823504 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 9 04:53:10.823562 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 9 04:53:10.823619 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 9 04:53:10.823684 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Sep 9 04:53:10.823766 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 9 04:53:10.823833 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 9 04:53:10.823891 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 9 04:53:10.823949 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 9 04:53:10.824014 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Sep 9 04:53:10.824074 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Sep 9 04:53:10.824133 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 9 04:53:10.824191 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 9 04:53:10.824268 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 9 04:53:10.824333 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 9 04:53:10.824399 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Sep 9 04:53:10.824458 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 9 04:53:10.824516 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 9 04:53:10.824572 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 9 04:53:10.824629 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 9 04:53:10.824697 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Sep 9 04:53:10.825900 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 9 04:53:10.825996 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 9 04:53:10.826063 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 9 04:53:10.826133 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 9 04:53:10.826250 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Sep 9 04:53:10.826349 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Sep 9 04:53:10.826425 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 9 04:53:10.826496 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 9 04:53:10.826574 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 9 04:53:10.826640 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 9 04:53:10.826707 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Sep 9 04:53:10.826789 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Sep 9 04:53:10.826851 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Sep 9 04:53:10.826916 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 9 04:53:10.826975 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 9 04:53:10.827035 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 9 04:53:10.827094 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 9 04:53:10.827158 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 9 04:53:10.827233 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 9 04:53:10.827293 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 9 04:53:10.827360 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 9 04:53:10.827423 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 9 04:53:10.827484 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 9 04:53:10.827544 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 9 04:53:10.827606 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 9 04:53:10.827670 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 04:53:10.827725 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 04:53:10.828505 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 04:53:10.828598 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 9 04:53:10.828654 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 9 04:53:10.828714 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 9 04:53:10.828884 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 9 04:53:10.828947 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 9 04:53:10.829002 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 9 04:53:10.829066 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 9 04:53:10.829203 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 9 04:53:10.829305 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 9 04:53:10.829379 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 9 04:53:10.829434 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 9 04:53:10.829487 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 9 04:53:10.829552 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 9 04:53:10.829607 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 9 04:53:10.829663 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 9 04:53:10.829725 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 9 04:53:10.829811 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 9 04:53:10.829870 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 9 04:53:10.829933 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 9 04:53:10.829985 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 9 04:53:10.830038 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 9 04:53:10.830119 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 9 04:53:10.830178 kernel: pci_bus
0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 9 04:53:10.830289 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 9 04:53:10.830362 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 9 04:53:10.830417 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 9 04:53:10.830470 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 9 04:53:10.830479 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 04:53:10.830487 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 04:53:10.830494 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 04:53:10.830506 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 04:53:10.830514 kernel: iommu: Default domain type: Translated Sep 9 04:53:10.830521 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 04:53:10.830529 kernel: efivars: Registered efivars operations Sep 9 04:53:10.830536 kernel: vgaarb: loaded Sep 9 04:53:10.830543 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 04:53:10.830551 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 04:53:10.830559 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 04:53:10.830566 kernel: pnp: PnP ACPI init Sep 9 04:53:10.830636 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 04:53:10.830647 kernel: pnp: PnP ACPI: found 1 devices Sep 9 04:53:10.830655 kernel: NET: Registered PF_INET protocol family Sep 9 04:53:10.830662 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 04:53:10.830670 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 04:53:10.830678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 04:53:10.830686 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 04:53:10.830693 kernel: TCP bind hash 
table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 04:53:10.830703 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 04:53:10.830710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 04:53:10.830718 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 04:53:10.830725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 04:53:10.831030 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 9 04:53:10.831049 kernel: PCI: CLS 0 bytes, default 64 Sep 9 04:53:10.831056 kernel: kvm [1]: HYP mode not available Sep 9 04:53:10.831064 kernel: Initialise system trusted keyrings Sep 9 04:53:10.831072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 04:53:10.831083 kernel: Key type asymmetric registered Sep 9 04:53:10.831091 kernel: Asymmetric key parser 'x509' registered Sep 9 04:53:10.831098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 04:53:10.831106 kernel: io scheduler mq-deadline registered Sep 9 04:53:10.831116 kernel: io scheduler kyber registered Sep 9 04:53:10.831124 kernel: io scheduler bfq registered Sep 9 04:53:10.831133 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 9 04:53:10.831199 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 9 04:53:10.831292 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 9 04:53:10.831360 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.831422 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 9 04:53:10.831482 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 9 04:53:10.831540 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.831604 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 9 
04:53:10.831663 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 9 04:53:10.831722 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.831807 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 9 04:53:10.831873 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 9 04:53:10.831931 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.831992 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 9 04:53:10.832050 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 9 04:53:10.832107 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.832169 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 9 04:53:10.832247 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 9 04:53:10.832312 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.832377 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 9 04:53:10.832436 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 9 04:53:10.832493 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.832583 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 9 04:53:10.832645 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 9 04:53:10.832703 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.832714 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 9 04:53:10.833623 kernel: pcieport 0000:00:03.0: 
PME: Signaling with IRQ 58 Sep 9 04:53:10.833696 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 9 04:53:10.833782 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 04:53:10.833794 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 04:53:10.833802 kernel: ACPI: button: Power Button [PWRB] Sep 9 04:53:10.833810 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 04:53:10.833876 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 9 04:53:10.833942 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 9 04:53:10.833959 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 04:53:10.833967 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 9 04:53:10.834030 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 9 04:53:10.834040 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 9 04:53:10.834047 kernel: thunder_xcv, ver 1.0 Sep 9 04:53:10.834055 kernel: thunder_bgx, ver 1.0 Sep 9 04:53:10.834062 kernel: nicpf, ver 1.0 Sep 9 04:53:10.834070 kernel: nicvf, ver 1.0 Sep 9 04:53:10.834139 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 04:53:10.834198 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T04:53:10 UTC (1757393590) Sep 9 04:53:10.834221 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 04:53:10.834230 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 9 04:53:10.834237 kernel: watchdog: NMI not fully supported Sep 9 04:53:10.834245 kernel: watchdog: Hard watchdog permanently disabled Sep 9 04:53:10.834252 kernel: NET: Registered PF_INET6 protocol family Sep 9 04:53:10.834260 kernel: Segment Routing with IPv6 Sep 9 04:53:10.834267 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 04:53:10.834274 kernel: NET: Registered PF_PACKET protocol family Sep 
9 04:53:10.834284 kernel: Key type dns_resolver registered Sep 9 04:53:10.834292 kernel: registered taskstats version 1 Sep 9 04:53:10.834299 kernel: Loading compiled-in X.509 certificates Sep 9 04:53:10.834307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5' Sep 9 04:53:10.834315 kernel: Demotion targets for Node 0: null Sep 9 04:53:10.834322 kernel: Key type .fscrypt registered Sep 9 04:53:10.834329 kernel: Key type fscrypt-provisioning registered Sep 9 04:53:10.834337 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 04:53:10.834346 kernel: ima: Allocated hash algorithm: sha1 Sep 9 04:53:10.834353 kernel: ima: No architecture policies found Sep 9 04:53:10.834360 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 04:53:10.834368 kernel: clk: Disabling unused clocks Sep 9 04:53:10.834375 kernel: PM: genpd: Disabling unused power domains Sep 9 04:53:10.834383 kernel: Warning: unable to open an initial console. Sep 9 04:53:10.834390 kernel: Freeing unused kernel memory: 38976K Sep 9 04:53:10.834398 kernel: Run /init as init process Sep 9 04:53:10.834405 kernel: with arguments: Sep 9 04:53:10.834413 kernel: /init Sep 9 04:53:10.834421 kernel: with environment: Sep 9 04:53:10.834428 kernel: HOME=/ Sep 9 04:53:10.834435 kernel: TERM=linux Sep 9 04:53:10.834443 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 04:53:10.834451 systemd[1]: Successfully made /usr/ read-only. Sep 9 04:53:10.834462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 04:53:10.834470 systemd[1]: Detected virtualization kvm. Sep 9 04:53:10.834479 systemd[1]: Detected architecture arm64. 
Sep 9 04:53:10.834487 systemd[1]: Running in initrd. Sep 9 04:53:10.834495 systemd[1]: No hostname configured, using default hostname. Sep 9 04:53:10.834503 systemd[1]: Hostname set to . Sep 9 04:53:10.834510 systemd[1]: Initializing machine ID from VM UUID. Sep 9 04:53:10.834518 systemd[1]: Queued start job for default target initrd.target. Sep 9 04:53:10.834526 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 04:53:10.834534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 04:53:10.834544 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 04:53:10.834555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 04:53:10.834563 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 04:53:10.834571 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 04:53:10.834581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 04:53:10.834589 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 04:53:10.834597 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 04:53:10.834607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 04:53:10.834615 systemd[1]: Reached target paths.target - Path Units. Sep 9 04:53:10.834623 systemd[1]: Reached target slices.target - Slice Units. Sep 9 04:53:10.834631 systemd[1]: Reached target swap.target - Swaps. Sep 9 04:53:10.834639 systemd[1]: Reached target timers.target - Timer Units. Sep 9 04:53:10.834646 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 9 04:53:10.834654 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 04:53:10.834662 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 04:53:10.834672 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 04:53:10.834679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 04:53:10.834687 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 04:53:10.834695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 04:53:10.834703 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 04:53:10.834711 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 04:53:10.834719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 04:53:10.834727 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 04:53:10.834735 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 04:53:10.834763 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 04:53:10.834772 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 04:53:10.834780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 04:53:10.834788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:53:10.834795 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 04:53:10.834834 systemd-journald[244]: Collecting audit messages is disabled. Sep 9 04:53:10.834855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 04:53:10.834863 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 9 04:53:10.834873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 04:53:10.834883 systemd-journald[244]: Journal started Sep 9 04:53:10.834901 systemd-journald[244]: Runtime Journal (/run/log/journal/d07d872bc44a4124acb48488d17bcefe) is 8M, max 76.5M, 68.5M free. Sep 9 04:53:10.819290 systemd-modules-load[247]: Inserted module 'overlay' Sep 9 04:53:10.837200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 04:53:10.840506 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 04:53:10.840561 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 04:53:10.840782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:53:10.844254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 04:53:10.846557 kernel: Bridge firewalling registered Sep 9 04:53:10.844934 systemd-modules-load[247]: Inserted module 'br_netfilter' Sep 9 04:53:10.848880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 04:53:10.854948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 04:53:10.861821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 04:53:10.863479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 04:53:10.866236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 04:53:10.879425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 04:53:10.882193 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 9 04:53:10.888957 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 04:53:10.890359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 04:53:10.893500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 04:53:10.900985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 04:53:10.920774 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90 Sep 9 04:53:10.950810 systemd-resolved[286]: Positive Trust Anchors: Sep 9 04:53:10.950828 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 04:53:10.950916 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 04:53:10.956816 systemd-resolved[286]: Defaulting to hostname 'linux'. Sep 9 04:53:10.958072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 04:53:10.958735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 9 04:53:11.026857 kernel: SCSI subsystem initialized Sep 9 04:53:11.031779 kernel: Loading iSCSI transport class v2.0-870. Sep 9 04:53:11.039795 kernel: iscsi: registered transport (tcp) Sep 9 04:53:11.052931 kernel: iscsi: registered transport (qla4xxx) Sep 9 04:53:11.053045 kernel: QLogic iSCSI HBA Driver Sep 9 04:53:11.075987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 04:53:11.106002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 04:53:11.110913 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 04:53:11.173705 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 04:53:11.176543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 04:53:11.250817 kernel: raid6: neonx8 gen() 15432 MB/s Sep 9 04:53:11.267813 kernel: raid6: neonx4 gen() 15553 MB/s Sep 9 04:53:11.284806 kernel: raid6: neonx2 gen() 13025 MB/s Sep 9 04:53:11.301807 kernel: raid6: neonx1 gen() 10308 MB/s Sep 9 04:53:11.318793 kernel: raid6: int64x8 gen() 6839 MB/s Sep 9 04:53:11.336002 kernel: raid6: int64x4 gen() 7249 MB/s Sep 9 04:53:11.352800 kernel: raid6: int64x2 gen() 6036 MB/s Sep 9 04:53:11.369810 kernel: raid6: int64x1 gen() 4976 MB/s Sep 9 04:53:11.369892 kernel: raid6: using algorithm neonx4 gen() 15553 MB/s Sep 9 04:53:11.386812 kernel: raid6: .... 
xor() 12175 MB/s, rmw enabled Sep 9 04:53:11.387012 kernel: raid6: using neon recovery algorithm Sep 9 04:53:11.391999 kernel: xor: measuring software checksum speed Sep 9 04:53:11.392061 kernel: 8regs : 20500 MB/sec Sep 9 04:53:11.392071 kernel: 32regs : 17088 MB/sec Sep 9 04:53:11.392090 kernel: arm64_neon : 28022 MB/sec Sep 9 04:53:11.392778 kernel: xor: using function: arm64_neon (28022 MB/sec) Sep 9 04:53:11.447795 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 04:53:11.459143 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 04:53:11.461732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 04:53:11.489941 systemd-udevd[494]: Using default interface naming scheme 'v255'. Sep 9 04:53:11.494582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 04:53:11.499046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 04:53:11.529137 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 9 04:53:11.561196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 04:53:11.565352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 04:53:11.633588 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 04:53:11.637259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 04:53:11.747050 kernel: ACPI: bus type USB registered Sep 9 04:53:11.747105 kernel: usbcore: registered new interface driver usbfs Sep 9 04:53:11.747116 kernel: usbcore: registered new interface driver hub Sep 9 04:53:11.747759 kernel: usbcore: registered new device driver usb Sep 9 04:53:11.759771 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 9 04:53:11.760001 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 9 04:53:11.761762 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 9 04:53:11.766925 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 9 04:53:11.767118 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 9 04:53:11.767200 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 9 04:53:11.777780 kernel: hub 1-0:1.0: USB hub found Sep 9 04:53:11.784147 kernel: hub 1-0:1.0: 4 ports detected Sep 9 04:53:11.786814 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Sep 9 04:53:11.787043 kernel: scsi host0: Virtio SCSI HBA Sep 9 04:53:11.787756 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 9 04:53:11.788814 kernel: hub 2-0:1.0: USB hub found Sep 9 04:53:11.790806 kernel: hub 2-0:1.0: 4 ports detected Sep 9 04:53:11.794844 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 04:53:11.794931 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 9 04:53:11.803283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 04:53:11.803409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:53:11.805174 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:53:11.808009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 04:53:11.810621 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 04:53:11.838778 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 9 04:53:11.838983 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 9 04:53:11.839924 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 9 04:53:11.840104 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 9 04:53:11.840181 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 9 04:53:11.841113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:53:11.850115 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 04:53:11.850190 kernel: GPT:17805311 != 80003071 Sep 9 04:53:11.850219 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 04:53:11.851116 kernel: GPT:17805311 != 80003071 Sep 9 04:53:11.851134 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 04:53:11.851786 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 04:53:11.852802 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 9 04:53:11.857785 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 9 04:53:11.858908 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 9 04:53:11.859115 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 04:53:11.861773 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 9 04:53:11.909673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 9 04:53:11.938757 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 9 04:53:11.949592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 9 04:53:11.957974 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Sep 9 04:53:11.960073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 9 04:53:11.965021 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 04:53:11.967176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 04:53:11.967959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 04:53:11.968551 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 04:53:11.970769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 04:53:11.974916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 04:53:11.994490 disk-uuid[601]: Primary Header is updated. Sep 9 04:53:11.994490 disk-uuid[601]: Secondary Entries is updated. Sep 9 04:53:11.994490 disk-uuid[601]: Secondary Header is updated. Sep 9 04:53:12.004530 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 9 04:53:12.007862 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 04:53:12.015779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 04:53:12.028858 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 9 04:53:12.169762 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 9 04:53:12.171772 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 9 04:53:12.171943 kernel: usbcore: registered new interface driver usbhid Sep 9 04:53:12.171960 kernel: usbhid: USB HID core driver Sep 9 04:53:12.265791 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 9 04:53:12.392782 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 9 04:53:12.445825 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 9 04:53:13.025783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 04:53:13.026254 disk-uuid[607]: The operation has completed successfully. Sep 9 04:53:13.095430 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 04:53:13.095575 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 04:53:13.117968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 04:53:13.149871 sh[626]: Success Sep 9 04:53:13.167969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 9 04:53:13.168043 kernel: device-mapper: uevent: version 1.0.3 Sep 9 04:53:13.168812 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 04:53:13.181800 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 04:53:13.233126 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 04:53:13.237793 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 04:53:13.247504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 04:53:13.263763 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (638) Sep 9 04:53:13.266953 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 Sep 9 04:53:13.267039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 04:53:13.273851 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 04:53:13.273914 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 04:53:13.274898 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 04:53:13.276541 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 04:53:13.277895 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 04:53:13.278735 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 04:53:13.279925 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 04:53:13.283171 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 9 04:53:13.316780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (667)
Sep 9 04:53:13.316848 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:53:13.316860 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:53:13.323807 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 04:53:13.323876 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:53:13.323888 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:53:13.329774 kernel: BTRFS info (device sda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:53:13.331624 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 04:53:13.334698 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 04:53:13.423650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:53:13.427635 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:53:13.480360 systemd-networkd[808]: lo: Link UP
Sep 9 04:53:13.481626 systemd-networkd[808]: lo: Gained carrier
Sep 9 04:53:13.484907 systemd-networkd[808]: Enumeration completed
Sep 9 04:53:13.485062 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:53:13.485921 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:53:13.485925 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:53:13.487086 systemd-networkd[808]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:53:13.487089 systemd-networkd[808]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:53:13.487894 systemd-networkd[808]: eth0: Link UP
Sep 9 04:53:13.494442 ignition[718]: Ignition 2.22.0
Sep 9 04:53:13.487917 systemd[1]: Reached target network.target - Network.
Sep 9 04:53:13.494450 ignition[718]: Stage: fetch-offline
Sep 9 04:53:13.488030 systemd-networkd[808]: eth1: Link UP
Sep 9 04:53:13.494481 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:13.488162 systemd-networkd[808]: eth0: Gained carrier
Sep 9 04:53:13.494490 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:13.488172 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:53:13.494575 ignition[718]: parsed url from cmdline: ""
Sep 9 04:53:13.496865 systemd-networkd[808]: eth1: Gained carrier
Sep 9 04:53:13.494578 ignition[718]: no config URL provided
Sep 9 04:53:13.496880 systemd-networkd[808]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:53:13.494583 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 04:53:13.497788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:53:13.494590 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Sep 9 04:53:13.500149 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 04:53:13.494595 ignition[718]: failed to fetch config: resource requires networking
Sep 9 04:53:13.494929 ignition[718]: Ignition finished successfully
Sep 9 04:53:13.524848 systemd-networkd[808]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 9 04:53:13.532346 ignition[816]: Ignition 2.22.0
Sep 9 04:53:13.533079 ignition[816]: Stage: fetch
Sep 9 04:53:13.533638 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:13.533648 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:13.534316 ignition[816]: parsed url from cmdline: ""
Sep 9 04:53:13.534332 ignition[816]: no config URL provided
Sep 9 04:53:13.534340 ignition[816]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 04:53:13.534354 ignition[816]: no config at "/usr/lib/ignition/user.ign"
Sep 9 04:53:13.534387 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 9 04:53:13.536055 ignition[816]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 9 04:53:13.549859 systemd-networkd[808]: eth0: DHCPv4 address 128.140.114.243/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 9 04:53:13.737180 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 9 04:53:13.744043 ignition[816]: GET result: OK
Sep 9 04:53:13.744170 ignition[816]: parsing config with SHA512: c464081428720e8e4558dec465d4c294beb0a54e6f4914fea4e8567012ce29dd36091315922efdb6993b03d178bfe3e94c026991a7dee8dfeb906b02e42aca97
Sep 9 04:53:13.755081 unknown[816]: fetched base config from "system"
Sep 9 04:53:13.755440 ignition[816]: fetch: fetch complete
Sep 9 04:53:13.755097 unknown[816]: fetched base config from "system"
Sep 9 04:53:13.755445 ignition[816]: fetch: fetch passed
Sep 9 04:53:13.755102 unknown[816]: fetched user config from "hetzner"
Sep 9 04:53:13.755508 ignition[816]: Ignition finished successfully
Sep 9 04:53:13.759170 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 04:53:13.768436 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 04:53:13.810968 ignition[823]: Ignition 2.22.0
Sep 9 04:53:13.810987 ignition[823]: Stage: kargs
Sep 9 04:53:13.811158 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:13.811169 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:13.812088 ignition[823]: kargs: kargs passed
Sep 9 04:53:13.815286 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 04:53:13.812142 ignition[823]: Ignition finished successfully
Sep 9 04:53:13.817552 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 04:53:13.867679 ignition[829]: Ignition 2.22.0
Sep 9 04:53:13.868333 ignition[829]: Stage: disks
Sep 9 04:53:13.868500 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:13.868510 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:13.871651 ignition[829]: disks: disks passed
Sep 9 04:53:13.871783 ignition[829]: Ignition finished successfully
Sep 9 04:53:13.874019 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 04:53:13.876071 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 04:53:13.876723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 04:53:13.877440 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:53:13.879529 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:53:13.881775 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:53:13.883765 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 04:53:13.917244 systemd-fsck[837]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 9 04:53:13.920472 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 04:53:13.923495 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 04:53:13.995767 kernel: EXT4-fs (sda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none.
Sep 9 04:53:13.996142 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 04:53:13.997652 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:53:14.000180 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:53:14.002562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 04:53:14.011986 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 9 04:53:14.018354 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 04:53:14.019758 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:53:14.023072 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 04:53:14.029820 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (845)
Sep 9 04:53:14.031730 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:53:14.031778 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:53:14.033831 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 04:53:14.041945 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 04:53:14.041993 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:53:14.042862 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:53:14.044935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:53:14.086163 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 04:53:14.091992 coreos-metadata[847]: Sep 09 04:53:14.091 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 9 04:53:14.095809 coreos-metadata[847]: Sep 09 04:53:14.095 INFO Fetch successful
Sep 9 04:53:14.097632 coreos-metadata[847]: Sep 09 04:53:14.096 INFO wrote hostname ci-4452-0-0-n-1f6e10e4b9 to /sysroot/etc/hostname
Sep 9 04:53:14.098882 initrd-setup-root[879]: cut: /sysroot/etc/group: No such file or directory
Sep 9 04:53:14.101012 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 04:53:14.107659 initrd-setup-root[887]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 04:53:14.112855 initrd-setup-root[894]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 04:53:14.214539 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 04:53:14.217266 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 04:53:14.219538 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 04:53:14.240821 kernel: BTRFS info (device sda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:53:14.266456 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 04:53:14.268573 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 04:53:14.275876 ignition[962]: INFO : Ignition 2.22.0
Sep 9 04:53:14.277139 ignition[962]: INFO : Stage: mount
Sep 9 04:53:14.277139 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:14.277139 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:14.279421 ignition[962]: INFO : mount: mount passed
Sep 9 04:53:14.279421 ignition[962]: INFO : Ignition finished successfully
Sep 9 04:53:14.281967 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 04:53:14.284163 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 04:53:14.313181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:53:14.339791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (973)
Sep 9 04:53:14.341268 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:53:14.341330 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:53:14.345060 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 9 04:53:14.345118 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:53:14.345135 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:53:14.348029 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:53:14.388354 ignition[990]: INFO : Ignition 2.22.0
Sep 9 04:53:14.388354 ignition[990]: INFO : Stage: files
Sep 9 04:53:14.392393 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:14.392393 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:14.392393 ignition[990]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 04:53:14.392393 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 04:53:14.392393 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 04:53:14.401679 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 04:53:14.401679 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 04:53:14.401679 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 04:53:14.399384 unknown[990]: wrote ssh authorized keys file for user: core
Sep 9 04:53:14.405046 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 04:53:14.405046 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 04:53:14.582501 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 04:53:14.608976 systemd-networkd[808]: eth1: Gained IPv6LL
Sep 9 04:53:15.440935 systemd-networkd[808]: eth0: Gained IPv6LL
Sep 9 04:53:15.597347 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:53:15.599477 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:53:15.611944 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:53:15.611944 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:53:15.611944 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 04:53:15.615293 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 04:53:15.615293 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 04:53:15.615293 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 04:53:16.023595 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 04:53:17.597806 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 04:53:17.597806 ignition[990]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 04:53:17.603907 ignition[990]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 04:53:17.619844 ignition[990]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:53:17.619844 ignition[990]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:53:17.619844 ignition[990]: INFO : files: files passed
Sep 9 04:53:17.619844 ignition[990]: INFO : Ignition finished successfully
Sep 9 04:53:17.608244 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 04:53:17.613312 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 04:53:17.615089 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 04:53:17.629645 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 04:53:17.629857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 04:53:17.639021 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:53:17.639021 initrd-setup-root-after-ignition[1019]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:53:17.641602 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:53:17.644159 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:53:17.646780 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 04:53:17.649369 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 04:53:17.721669 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 04:53:17.721830 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 04:53:17.723653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 04:53:17.724704 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 04:53:17.726337 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 04:53:17.727519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 04:53:17.753510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:53:17.756138 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 04:53:17.782161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:53:17.783569 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:53:17.784963 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 04:53:17.785967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 04:53:17.786103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:53:17.788511 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 04:53:17.789703 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 04:53:17.791536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 04:53:17.793105 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:53:17.794518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 04:53:17.795816 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 04:53:17.797024 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 04:53:17.798207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:53:17.800205 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 04:53:17.801292 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 04:53:17.802366 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 04:53:17.803350 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 04:53:17.803506 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:53:17.805050 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:53:17.805854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:53:17.807088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 04:53:17.807166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:53:17.808313 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 04:53:17.808428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:53:17.809906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 04:53:17.810015 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:53:17.811374 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 04:53:17.811469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 04:53:17.812359 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 04:53:17.812453 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 04:53:17.814269 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 04:53:17.816221 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 04:53:17.816352 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:53:17.820572 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 04:53:17.821858 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 04:53:17.821991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:53:17.824222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 04:53:17.824959 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:53:17.833541 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 04:53:17.835927 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 04:53:17.845134 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 04:53:17.848444 ignition[1043]: INFO : Ignition 2.22.0
Sep 9 04:53:17.848444 ignition[1043]: INFO : Stage: umount
Sep 9 04:53:17.850135 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:53:17.850135 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 9 04:53:17.850135 ignition[1043]: INFO : umount: umount passed
Sep 9 04:53:17.850135 ignition[1043]: INFO : Ignition finished successfully
Sep 9 04:53:17.851065 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 04:53:17.851166 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 04:53:17.853097 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 04:53:17.853238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 04:53:17.854265 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 04:53:17.854352 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 04:53:17.855882 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 04:53:17.855933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 04:53:17.856701 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 04:53:17.856737 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 04:53:17.857624 systemd[1]: Stopped target network.target - Network.
Sep 9 04:53:17.858421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 04:53:17.858474 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:53:17.859427 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 04:53:17.860157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 04:53:17.863811 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:53:17.865133 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 04:53:17.866454 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 04:53:17.867968 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 04:53:17.868043 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:53:17.869502 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 04:53:17.869563 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:53:17.871055 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 04:53:17.871150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 04:53:17.871967 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 04:53:17.872001 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 04:53:17.872799 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 04:53:17.872840 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 04:53:17.873896 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 04:53:17.875547 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 04:53:17.878967 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 04:53:17.879268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 04:53:17.882933 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 04:53:17.883238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 04:53:17.883283 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:53:17.887136 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 04:53:17.888623 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 04:53:17.888785 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 04:53:17.891315 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 04:53:17.893086 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 04:53:17.894772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 04:53:17.894831 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:53:17.897400 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 04:53:17.898851 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 04:53:17.899859 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:53:17.900556 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 04:53:17.900598 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:53:17.904362 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 04:53:17.904408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:53:17.905921 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:53:17.909499 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 04:53:17.922996 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 04:53:17.923960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:53:17.925068 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 04:53:17.925110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:53:17.927671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 04:53:17.927715 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:53:17.929830 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 04:53:17.929898 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:53:17.931799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 04:53:17.931846 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:53:17.933417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 04:53:17.933469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:53:17.935883 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 04:53:17.937876 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 04:53:17.937951 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:53:17.941114 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 04:53:17.941165 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:53:17.942501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:53:17.942548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:53:17.944986 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 04:53:17.945096 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 04:53:17.951918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 04:53:17.952061 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 04:53:17.953868 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 04:53:17.955972 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 04:53:17.980608 systemd[1]: Switching root.
Sep 9 04:53:18.025592 systemd-journald[244]: Journal stopped
Sep 9 04:53:19.025169 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 9 04:53:19.025260 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 04:53:19.025274 kernel: SELinux: policy capability open_perms=1
Sep 9 04:53:19.025283 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 04:53:19.025296 kernel: SELinux: policy capability always_check_network=0
Sep 9 04:53:19.025305 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 04:53:19.025315 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 04:53:19.025324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 04:53:19.025334 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 04:53:19.025343 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 04:53:19.025353 kernel: audit: type=1403 audit(1757393598.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 04:53:19.025363 systemd[1]: Successfully loaded SELinux policy in 73.242ms.
Sep 9 04:53:19.025385 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.445ms.
Sep 9 04:53:19.025397 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 04:53:19.025408 systemd[1]: Detected virtualization kvm.
Sep 9 04:53:19.025418 systemd[1]: Detected architecture arm64.
Sep 9 04:53:19.025428 systemd[1]: Detected first boot.
Sep 9 04:53:19.025438 systemd[1]: Hostname set to <ci-4452-0-0-n-1f6e10e4b9>.
Sep 9 04:53:19.025448 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 04:53:19.025458 zram_generator::config[1086]: No configuration found.
Sep 9 04:53:19.025470 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 04:53:19.025482 systemd[1]: Populated /etc with preset unit settings.
Sep 9 04:53:19.025493 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 04:53:19.025506 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 04:53:19.025520 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 04:53:19.025532 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 04:53:19.025544 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 04:53:19.025557 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 04:53:19.025568 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 04:53:19.025578 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 04:53:19.025589 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 04:53:19.025602 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 04:53:19.025612 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 04:53:19.025622 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 04:53:19.025634 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 04:53:19.025645 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 04:53:19.025655 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 04:53:19.025665 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 04:53:19.025675 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 04:53:19.025686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 9 04:53:19.025696 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 04:53:19.025712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 04:53:19.025722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 04:53:19.025733 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 04:53:19.026149 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 04:53:19.026227 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 04:53:19.026244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 04:53:19.026254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 04:53:19.026265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 04:53:19.026276 systemd[1]: Reached target slices.target - Slice Units. Sep 9 04:53:19.026290 systemd[1]: Reached target swap.target - Swaps. Sep 9 04:53:19.026300 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 04:53:19.026311 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 04:53:19.026321 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 04:53:19.026334 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 04:53:19.026345 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 04:53:19.026356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 04:53:19.026367 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 04:53:19.026378 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 04:53:19.026389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 04:53:19.026400 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 9 04:53:19.026410 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 04:53:19.026420 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 04:53:19.026431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 04:53:19.026442 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 04:53:19.026452 systemd[1]: Reached target machines.target - Containers. Sep 9 04:53:19.026462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 04:53:19.026475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 04:53:19.026485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 04:53:19.026497 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 04:53:19.026508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 04:53:19.026518 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 04:53:19.026528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 04:53:19.026539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 04:53:19.026549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 04:53:19.026560 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 04:53:19.026572 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 04:53:19.026583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 04:53:19.026593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Sep 9 04:53:19.026604 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 04:53:19.026616 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 04:53:19.026626 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 04:53:19.026637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 04:53:19.026649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 04:53:19.026660 kernel: ACPI: bus type drm_connector registered Sep 9 04:53:19.026670 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 04:53:19.026681 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 04:53:19.026691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 04:53:19.026703 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 04:53:19.026713 systemd[1]: Stopped verity-setup.service. Sep 9 04:53:19.026724 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 04:53:19.026736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 04:53:19.026764 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 04:53:19.026776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 04:53:19.026789 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 04:53:19.026841 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 04:53:19.026854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 04:53:19.026865 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 9 04:53:19.026876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 04:53:19.026886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 04:53:19.026896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 04:53:19.026906 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 04:53:19.026919 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 04:53:19.026930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 04:53:19.026941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 04:53:19.026951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 04:53:19.026961 kernel: loop: module loaded Sep 9 04:53:19.026972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 04:53:19.026982 kernel: fuse: init (API version 7.41) Sep 9 04:53:19.026992 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 04:53:19.027003 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 04:53:19.027015 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 04:53:19.027026 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 04:53:19.027037 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 04:53:19.027047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 04:53:19.027057 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 04:53:19.027069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 9 04:53:19.027079 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 04:53:19.027127 systemd-journald[1154]: Collecting audit messages is disabled. Sep 9 04:53:19.027272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 04:53:19.027292 systemd-journald[1154]: Journal started Sep 9 04:53:19.027316 systemd-journald[1154]: Runtime Journal (/run/log/journal/d07d872bc44a4124acb48488d17bcefe) is 8M, max 76.5M, 68.5M free. Sep 9 04:53:18.720080 systemd[1]: Queued start job for default target multi-user.target. Sep 9 04:53:18.739898 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 9 04:53:18.740487 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 04:53:19.040160 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 04:53:19.043789 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 04:53:19.053793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 04:53:19.057859 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 04:53:19.058085 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 04:53:19.059937 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 04:53:19.061803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 04:53:19.065909 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 04:53:19.070270 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 04:53:19.073033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 04:53:19.074840 kernel: loop0: detected capacity change from 0 to 100632 Sep 9 04:53:19.095385 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 9 04:53:19.106700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 04:53:19.106880 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 04:53:19.109902 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 04:53:19.112822 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 04:53:19.116490 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 04:53:19.118091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 04:53:19.123636 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 04:53:19.142149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 04:53:19.150964 kernel: loop1: detected capacity change from 0 to 203944 Sep 9 04:53:19.162730 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 04:53:19.166151 systemd-journald[1154]: Time spent on flushing to /var/log/journal/d07d872bc44a4124acb48488d17bcefe is 71.740ms for 1177 entries. Sep 9 04:53:19.166151 systemd-journald[1154]: System Journal (/var/log/journal/d07d872bc44a4124acb48488d17bcefe) is 8M, max 584.8M, 576.8M free. Sep 9 04:53:19.250589 systemd-journald[1154]: Received client request to flush runtime journal. Sep 9 04:53:19.250674 kernel: loop2: detected capacity change from 0 to 119368 Sep 9 04:53:19.250694 kernel: loop3: detected capacity change from 0 to 8 Sep 9 04:53:19.202890 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 04:53:19.211041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 04:53:19.227368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 04:53:19.256844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 9 04:53:19.274088 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 04:53:19.274112 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 04:53:19.282792 kernel: loop4: detected capacity change from 0 to 100632 Sep 9 04:53:19.282889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 04:53:19.298764 kernel: loop5: detected capacity change from 0 to 203944 Sep 9 04:53:19.324772 kernel: loop6: detected capacity change from 0 to 119368 Sep 9 04:53:19.351775 kernel: loop7: detected capacity change from 0 to 8 Sep 9 04:53:19.352708 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 9 04:53:19.353402 (sd-merge)[1226]: Merged extensions into '/usr'. Sep 9 04:53:19.364927 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 04:53:19.364948 systemd[1]: Reloading... Sep 9 04:53:19.488770 zram_generator::config[1251]: No configuration found. Sep 9 04:53:19.551003 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 04:53:19.705130 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 04:53:19.705446 systemd[1]: Reloading finished in 339 ms. Sep 9 04:53:19.720253 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 04:53:19.724779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 04:53:19.739007 systemd[1]: Starting ensure-sysext.service... Sep 9 04:53:19.742958 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 04:53:19.753220 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 04:53:19.764861 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Sep 9 04:53:19.775038 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)... Sep 9 04:53:19.775061 systemd[1]: Reloading... Sep 9 04:53:19.792937 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 04:53:19.793384 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 04:53:19.793695 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 04:53:19.795224 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 04:53:19.796413 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 04:53:19.797072 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Sep 9 04:53:19.797313 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Sep 9 04:53:19.804442 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 04:53:19.805807 systemd-tmpfiles[1291]: Skipping /boot Sep 9 04:53:19.822981 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 04:53:19.825757 systemd-tmpfiles[1291]: Skipping /boot Sep 9 04:53:19.835772 zram_generator::config[1316]: No configuration found. Sep 9 04:53:20.017704 systemd[1]: Reloading finished in 242 ms. Sep 9 04:53:20.033794 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 04:53:20.034988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 04:53:20.049993 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 04:53:20.052952 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 9 04:53:20.055115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 04:53:20.060030 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 04:53:20.070260 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 04:53:20.075050 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 04:53:20.085234 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 04:53:20.089634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 04:53:20.095481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 04:53:20.105118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 04:53:20.114164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 04:53:20.114946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 04:53:20.115079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 04:53:20.121640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 04:53:20.123005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 04:53:20.123153 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 9 04:53:20.129191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 04:53:20.135901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 04:53:20.136703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 04:53:20.136907 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 04:53:20.139783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 04:53:20.141259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 04:53:20.141446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 04:53:20.155090 systemd[1]: Finished ensure-sysext.service. Sep 9 04:53:20.157809 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 04:53:20.165945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 04:53:20.166868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 04:53:20.169165 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 04:53:20.172706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 04:53:20.175392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 04:53:20.175509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 04:53:20.177919 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 9 04:53:20.183042 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 04:53:20.184820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 04:53:20.185608 systemd-udevd[1363]: Using default interface naming scheme 'v255'. Sep 9 04:53:20.186292 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 04:53:20.187853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 04:53:20.191374 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 04:53:20.213727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 04:53:20.227601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 04:53:20.234538 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 04:53:20.235928 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 04:53:20.246459 augenrules[1418]: No rules Sep 9 04:53:20.251050 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 04:53:20.251315 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 04:53:20.382589 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 04:53:20.500772 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 04:53:20.528185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 9 04:53:20.532322 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 04:53:20.569631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 9 04:53:20.576551 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 9 04:53:20.576680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 04:53:20.579915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 04:53:20.582446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 04:53:20.586271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 04:53:20.587384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 04:53:20.587435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 04:53:20.587459 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 04:53:20.601535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 04:53:20.602299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 04:53:20.614693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 04:53:20.614900 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 04:53:20.617563 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 04:53:20.618935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 04:53:20.621260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 9 04:53:20.621335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 04:53:20.673069 systemd-networkd[1405]: lo: Link UP Sep 9 04:53:20.673411 systemd-networkd[1405]: lo: Gained carrier Sep 9 04:53:20.675392 systemd-networkd[1405]: Enumeration completed Sep 9 04:53:20.676337 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 04:53:20.676826 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 04:53:20.676898 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 04:53:20.677672 systemd-networkd[1405]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 04:53:20.677830 systemd-networkd[1405]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 04:53:20.678362 systemd-networkd[1405]: eth0: Link UP Sep 9 04:53:20.678815 systemd-networkd[1405]: eth0: Gained carrier Sep 9 04:53:20.679147 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 9 04:53:20.679229 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 9 04:53:20.679272 kernel: [drm] features: -context_init Sep 9 04:53:20.678899 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 04:53:20.680711 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 04:53:20.684075 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 9 04:53:20.692023 systemd-networkd[1405]: eth1: Link UP Sep 9 04:53:20.693078 systemd-networkd[1405]: eth1: Gained carrier Sep 9 04:53:20.693189 systemd-networkd[1405]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 04:53:20.724787 kernel: [drm] number of scanouts: 1 Sep 9 04:53:20.724851 kernel: [drm] number of cap sets: 0 Sep 9 04:53:20.724997 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 04:53:20.727038 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 04:53:20.732895 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Sep 9 04:53:20.762443 systemd-networkd[1405]: eth0: DHCPv4 address 128.140.114.243/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 9 04:53:20.767097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:53:20.767858 systemd-networkd[1405]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 9 04:53:20.768670 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Sep 9 04:53:20.785842 systemd-resolved[1362]: Positive Trust Anchors: Sep 9 04:53:20.785865 systemd-resolved[1362]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 04:53:20.785898 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 04:53:20.795142 systemd-resolved[1362]: Using system hostname 'ci-4452-0-0-n-1f6e10e4b9'. Sep 9 04:53:20.803017 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 04:53:20.803987 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 04:53:20.805500 systemd[1]: Reached target network.target - Network. Sep 9 04:53:20.807273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 04:53:20.811463 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 04:53:20.819769 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 9 04:53:20.835077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 04:53:20.835385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:53:20.839880 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 04:53:20.842916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 04:53:20.912323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 04:53:20.913218 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 9 04:53:20.913865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 04:53:20.915260 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 04:53:20.917222 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 04:53:20.918767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 04:53:20.919731 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 04:53:20.920381 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 04:53:20.920415 systemd[1]: Reached target paths.target - Path Units. Sep 9 04:53:20.920898 systemd[1]: Reached target timers.target - Timer Units. Sep 9 04:53:20.922477 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 04:53:20.924796 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 04:53:20.927508 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 04:53:20.928482 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 04:53:20.929266 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 04:53:20.932207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 04:53:20.933780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 04:53:20.935262 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 04:53:20.936098 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 04:53:20.936694 systemd[1]: Reached target basic.target - Basic System. 
Sep 9 04:53:20.937449 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:53:20.937488 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:53:20.938834 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 04:53:20.942917 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 04:53:20.944885 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 04:53:20.948279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 04:53:20.950013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 04:53:20.960043 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 04:53:20.960650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 04:53:20.963410 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 04:53:20.969463 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 04:53:20.977960 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 9 04:53:20.986099 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 04:53:20.987820 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 04:53:20.990237 jq[1505]: false
Sep 9 04:53:20.990617 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 04:53:20.993947 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 04:53:20.994512 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 04:53:21.000229 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 04:53:21.005977 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 04:53:21.011798 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 04:53:21.012730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 04:53:21.016860 extend-filesystems[1506]: Found /dev/sda6
Sep 9 04:53:21.016054 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 04:53:21.031405 extend-filesystems[1506]: Found /dev/sda9
Sep 9 04:53:21.032870 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 04:53:21.034150 extend-filesystems[1506]: Checking size of /dev/sda9
Sep 9 04:53:21.034848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 04:53:21.058968 coreos-metadata[1502]: Sep 09 04:53:21.056 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 9 04:53:21.062883 extend-filesystems[1506]: Resized partition /dev/sda9
Sep 9 04:53:21.066488 coreos-metadata[1502]: Sep 09 04:53:21.066 INFO Fetch successful
Sep 9 04:53:21.066488 coreos-metadata[1502]: Sep 09 04:53:21.066 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 9 04:53:21.072539 jq[1520]: true
Sep 9 04:53:21.072835 coreos-metadata[1502]: Sep 09 04:53:21.072 INFO Fetch successful
Sep 9 04:53:21.079766 extend-filesystems[1544]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 04:53:21.079702 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 04:53:21.079984 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 04:53:21.092488 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 9 04:53:21.096028 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 04:53:21.103877 systemd-timesyncd[1384]: Contacted time server 77.90.40.94:123 (0.flatcar.pool.ntp.org).
Sep 9 04:53:21.103952 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2025-09-09 04:53:21.411332 UTC.
Sep 9 04:53:21.105150 update_engine[1518]: I20250909 04:53:21.104943 1518 main.cc:92] Flatcar Update Engine starting
Sep 9 04:53:21.108543 tar[1525]: linux-arm64/helm
Sep 9 04:53:21.126310 dbus-daemon[1503]: [system] SELinux support is enabled
Sep 9 04:53:21.126516 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 04:53:21.132923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 04:53:21.132972 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 04:53:21.137910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 04:53:21.137946 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 04:53:21.144983 jq[1547]: true
Sep 9 04:53:21.163051 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 04:53:21.167362 update_engine[1518]: I20250909 04:53:21.166946 1518 update_check_scheduler.cc:74] Next update check in 7m39s
Sep 9 04:53:21.193988 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 04:53:21.263780 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 9 04:53:21.276354 extend-filesystems[1544]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 9 04:53:21.276354 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 9 04:53:21.276354 extend-filesystems[1544]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 9 04:53:21.279347 extend-filesystems[1506]: Resized filesystem in /dev/sda9
Sep 9 04:53:21.277900 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 04:53:21.279787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 04:53:21.282699 bash[1575]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:53:21.288312 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 04:53:21.295097 systemd[1]: Starting sshkeys.service...
Sep 9 04:53:21.309359 systemd-logind[1515]: New seat seat0.
Sep 9 04:53:21.314355 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 04:53:21.315349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 04:53:21.317705 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 04:53:21.317735 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Sep 9 04:53:21.322915 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 04:53:21.362036 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 04:53:21.369093 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 04:53:21.512075 coreos-metadata[1588]: Sep 09 04:53:21.511 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 9 04:53:21.516416 coreos-metadata[1588]: Sep 09 04:53:21.515 INFO Fetch successful
Sep 9 04:53:21.524085 unknown[1588]: wrote ssh authorized keys file for user: core
Sep 9 04:53:21.571347 containerd[1539]: time="2025-09-09T04:53:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 04:53:21.573773 containerd[1539]: time="2025-09-09T04:53:21.573441400Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 04:53:21.577406 update-ssh-keys[1595]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:53:21.581703 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 04:53:21.587287 containerd[1539]: time="2025-09-09T04:53:21.586581480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.24µs"
Sep 9 04:53:21.587287 containerd[1539]: time="2025-09-09T04:53:21.586629040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 04:53:21.587287 containerd[1539]: time="2025-09-09T04:53:21.586656920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 04:53:21.587594 containerd[1539]: time="2025-09-09T04:53:21.587563920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 04:53:21.587705 containerd[1539]: time="2025-09-09T04:53:21.587685440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 04:53:21.587794 containerd[1539]: time="2025-09-09T04:53:21.587777600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:53:21.587931 containerd[1539]: time="2025-09-09T04:53:21.587908400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:53:21.588000 containerd[1539]: time="2025-09-09T04:53:21.587985560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:53:21.588070 systemd[1]: Finished sshkeys.service.
Sep 9 04:53:21.588511 containerd[1539]: time="2025-09-09T04:53:21.588481080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:53:21.588587 containerd[1539]: time="2025-09-09T04:53:21.588570520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:53:21.588643 containerd[1539]: time="2025-09-09T04:53:21.588625880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:53:21.588725 containerd[1539]: time="2025-09-09T04:53:21.588709320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 04:53:21.589602 containerd[1539]: time="2025-09-09T04:53:21.589577440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 04:53:21.590396 containerd[1539]: time="2025-09-09T04:53:21.590362480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:53:21.590517 containerd[1539]: time="2025-09-09T04:53:21.590497800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:53:21.590570 containerd[1539]: time="2025-09-09T04:53:21.590557840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 04:53:21.590640 containerd[1539]: time="2025-09-09T04:53:21.590623920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 04:53:21.591934 containerd[1539]: time="2025-09-09T04:53:21.591903680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 04:53:21.592134 containerd[1539]: time="2025-09-09T04:53:21.592113160Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 04:53:21.596832 containerd[1539]: time="2025-09-09T04:53:21.596780560Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 04:53:21.597004 containerd[1539]: time="2025-09-09T04:53:21.596988160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 04:53:21.597068 containerd[1539]: time="2025-09-09T04:53:21.597054640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 04:53:21.597144 containerd[1539]: time="2025-09-09T04:53:21.597130280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 04:53:21.597217 containerd[1539]: time="2025-09-09T04:53:21.597204280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 04:53:21.597283 containerd[1539]: time="2025-09-09T04:53:21.597269760Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 04:53:21.597336 containerd[1539]: time="2025-09-09T04:53:21.597324320Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 04:53:21.597392 containerd[1539]: time="2025-09-09T04:53:21.597379280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 04:53:21.597446 containerd[1539]: time="2025-09-09T04:53:21.597432600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 04:53:21.597521 containerd[1539]: time="2025-09-09T04:53:21.597507120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 04:53:21.597577 containerd[1539]: time="2025-09-09T04:53:21.597559880Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 04:53:21.597642 containerd[1539]: time="2025-09-09T04:53:21.597628720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 04:53:21.597887 containerd[1539]: time="2025-09-09T04:53:21.597861080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 04:53:21.597963 containerd[1539]: time="2025-09-09T04:53:21.597949160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 04:53:21.598073 containerd[1539]: time="2025-09-09T04:53:21.598050200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 04:53:21.598135 containerd[1539]: time="2025-09-09T04:53:21.598121680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 04:53:21.598232 containerd[1539]: time="2025-09-09T04:53:21.598216480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 04:53:21.598303 containerd[1539]: time="2025-09-09T04:53:21.598285920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 04:53:21.598385 containerd[1539]: time="2025-09-09T04:53:21.598371320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 04:53:21.598454 containerd[1539]: time="2025-09-09T04:53:21.598441120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 04:53:21.598515 containerd[1539]: time="2025-09-09T04:53:21.598502960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 04:53:21.598580 containerd[1539]: time="2025-09-09T04:53:21.598567920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 04:53:21.598646 containerd[1539]: time="2025-09-09T04:53:21.598632400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 04:53:21.598922 containerd[1539]: time="2025-09-09T04:53:21.598905880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 04:53:21.598985 containerd[1539]: time="2025-09-09T04:53:21.598974400Z" level=info msg="Start snapshots syncer"
Sep 9 04:53:21.599063 containerd[1539]: time="2025-09-09T04:53:21.599047960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 04:53:21.599533 containerd[1539]: time="2025-09-09T04:53:21.599490360Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 04:53:21.599761 containerd[1539]: time="2025-09-09T04:53:21.599729680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 04:53:21.599916 containerd[1539]: time="2025-09-09T04:53:21.599899040Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 04:53:21.600290 containerd[1539]: time="2025-09-09T04:53:21.600264120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 04:53:21.600381 containerd[1539]: time="2025-09-09T04:53:21.600366960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 04:53:21.600432 containerd[1539]: time="2025-09-09T04:53:21.600420560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 04:53:21.600494 containerd[1539]: time="2025-09-09T04:53:21.600480880Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 04:53:21.600705 containerd[1539]: time="2025-09-09T04:53:21.600667280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 04:53:21.600789 containerd[1539]: time="2025-09-09T04:53:21.600774520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 04:53:21.600856 containerd[1539]: time="2025-09-09T04:53:21.600843920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 04:53:21.600930 containerd[1539]: time="2025-09-09T04:53:21.600916760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 04:53:21.600987 containerd[1539]: time="2025-09-09T04:53:21.600973720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 04:53:21.601044 containerd[1539]: time="2025-09-09T04:53:21.601030920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 04:53:21.601125 containerd[1539]: time="2025-09-09T04:53:21.601110600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601300840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601323120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601342320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601353560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601407040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601428800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601508920Z" level=info msg="runtime interface created"
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601514480Z" level=info msg="created NRI interface"
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601523600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601543240Z" level=info msg="Connect containerd service"
Sep 9 04:53:21.601627 containerd[1539]: time="2025-09-09T04:53:21.601591640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 04:53:21.603049 containerd[1539]: time="2025-09-09T04:53:21.603020360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 04:53:21.614894 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688485120Z" level=info msg="Start subscribing containerd event"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688561360Z" level=info msg="Start recovering state"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688653760Z" level=info msg="Start event monitor"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688667200Z" level=info msg="Start cni network conf syncer for default"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688678680Z" level=info msg="Start streaming server"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688686520Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688704320Z" level=info msg="runtime interface starting up..."
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688711640Z" level=info msg="starting plugins..."
Sep 9 04:53:21.688935 containerd[1539]: time="2025-09-09T04:53:21.688728280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 04:53:21.690009 containerd[1539]: time="2025-09-09T04:53:21.689827360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 04:53:21.690009 containerd[1539]: time="2025-09-09T04:53:21.689979600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 04:53:21.691273 containerd[1539]: time="2025-09-09T04:53:21.690868680Z" level=info msg="containerd successfully booted in 0.120178s"
Sep 9 04:53:21.690986 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 04:53:21.745673 tar[1525]: linux-arm64/LICENSE
Sep 9 04:53:21.745784 tar[1525]: linux-arm64/README.md
Sep 9 04:53:21.768795 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 04:53:21.841007 systemd-networkd[1405]: eth1: Gained IPv6LL
Sep 9 04:53:21.847129 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 04:53:21.849358 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 04:53:21.855017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:53:21.859059 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 04:53:21.904251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 04:53:22.188013 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 04:53:22.219224 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 04:53:22.225028 systemd-networkd[1405]: eth0: Gained IPv6LL
Sep 9 04:53:22.226372 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 04:53:22.253111 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 04:53:22.253439 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 04:53:22.258403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 04:53:22.288811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 04:53:22.294364 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 04:53:22.298390 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 04:53:22.299411 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 04:53:22.769455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:53:22.771702 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 04:53:22.777995 systemd[1]: Startup finished in 2.321s (kernel) + 7.580s (initrd) + 4.633s (userspace) = 14.536s.
Sep 9 04:53:22.780577 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:53:23.339620 kubelet[1652]: E0909 04:53:23.339541 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:53:23.342756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:53:23.342913 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:53:23.343426 systemd[1]: kubelet.service: Consumed 932ms CPU time, 257.1M memory peak.
Sep 9 04:53:33.397588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:53:33.400253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:53:33.564714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:53:33.576301 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:53:33.632580 kubelet[1669]: E0909 04:53:33.632511 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:53:33.636610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:53:33.636914 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:53:33.637682 systemd[1]: kubelet.service: Consumed 174ms CPU time, 105.5M memory peak.
Sep 9 04:53:43.647325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 04:53:43.649950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:53:43.814457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:53:43.825657 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:53:43.869201 kubelet[1684]: E0909 04:53:43.869077 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:53:43.871636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:53:43.871792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:53:43.872309 systemd[1]: kubelet.service: Consumed 163ms CPU time, 105.2M memory peak.
Sep 9 04:53:53.897369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 04:53:53.903025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:53:54.067531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:53:54.074383 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:53:54.116521 kubelet[1699]: E0909 04:53:54.116421 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:53:54.119512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:53:54.119643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:53:54.120257 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107M memory peak.
Sep 9 04:53:58.921113 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 04:53:58.923278 systemd[1]: Started sshd@0-128.140.114.243:22-147.75.109.163:55478.service - OpenSSH per-connection server daemon (147.75.109.163:55478).
Sep 9 04:53:59.955564 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 55478 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so
Sep 9 04:53:59.959089 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:53:59.969831 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 04:53:59.971625 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 04:53:59.982013 systemd-logind[1515]: New session 1 of user core.
Sep 9 04:54:00.000896 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 04:54:00.003893 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 04:54:00.028261 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 04:54:00.033685 systemd-logind[1515]: New session c1 of user core.
Sep 9 04:54:00.177016 systemd[1712]: Queued start job for default target default.target.
Sep 9 04:54:00.191714 systemd[1712]: Created slice app.slice - User Application Slice.
Sep 9 04:54:00.191796 systemd[1712]: Reached target paths.target - Paths.
Sep 9 04:54:00.191863 systemd[1712]: Reached target timers.target - Timers.
Sep 9 04:54:00.193912 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 04:54:00.208567 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 04:54:00.208827 systemd[1712]: Reached target sockets.target - Sockets.
Sep 9 04:54:00.208931 systemd[1712]: Reached target basic.target - Basic System.
Sep 9 04:54:00.208995 systemd[1712]: Reached target default.target - Main User Target.
Sep 9 04:54:00.209044 systemd[1712]: Startup finished in 166ms.
Sep 9 04:54:00.209274 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 04:54:00.214959 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 04:54:00.910812 systemd[1]: Started sshd@1-128.140.114.243:22-147.75.109.163:56980.service - OpenSSH per-connection server daemon (147.75.109.163:56980).
Sep 9 04:54:01.924055 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 56980 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so
Sep 9 04:54:01.926163 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:01.933288 systemd-logind[1515]: New session 2 of user core.
Sep 9 04:54:01.941107 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 04:54:02.603240 sshd[1726]: Connection closed by 147.75.109.163 port 56980 Sep 9 04:54:02.604222 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:02.612523 systemd[1]: sshd@1-128.140.114.243:22-147.75.109.163:56980.service: Deactivated successfully. Sep 9 04:54:02.616069 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 04:54:02.617499 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Sep 9 04:54:02.619303 systemd-logind[1515]: Removed session 2. Sep 9 04:54:02.786983 systemd[1]: Started sshd@2-128.140.114.243:22-147.75.109.163:56984.service - OpenSSH per-connection server daemon (147.75.109.163:56984). Sep 9 04:54:03.802285 sshd[1732]: Accepted publickey for core from 147.75.109.163 port 56984 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:54:03.804359 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:54:03.811157 systemd-logind[1515]: New session 3 of user core. Sep 9 04:54:03.819068 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 04:54:04.146787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 04:54:04.149318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:04.328071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 04:54:04.337149 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 04:54:04.383949 kubelet[1745]: E0909 04:54:04.383881 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 04:54:04.387420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 04:54:04.387810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 04:54:04.388719 systemd[1]: kubelet.service: Consumed 177ms CPU time, 105.1M memory peak. Sep 9 04:54:04.480025 sshd[1735]: Connection closed by 147.75.109.163 port 56984 Sep 9 04:54:04.481262 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:04.487032 systemd[1]: sshd@2-128.140.114.243:22-147.75.109.163:56984.service: Deactivated successfully. Sep 9 04:54:04.487100 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Sep 9 04:54:04.488991 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 04:54:04.492035 systemd-logind[1515]: Removed session 3. Sep 9 04:54:04.653037 systemd[1]: Started sshd@3-128.140.114.243:22-147.75.109.163:56986.service - OpenSSH per-connection server daemon (147.75.109.163:56986). Sep 9 04:54:05.665536 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 56986 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:54:05.667955 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:54:05.673911 systemd-logind[1515]: New session 4 of user core. Sep 9 04:54:05.679063 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 9 04:54:06.242728 update_engine[1518]: I20250909 04:54:06.241910 1518 update_attempter.cc:509] Updating boot flags... Sep 9 04:54:06.348801 sshd[1759]: Connection closed by 147.75.109.163 port 56986 Sep 9 04:54:06.349328 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:06.356806 systemd[1]: sshd@3-128.140.114.243:22-147.75.109.163:56986.service: Deactivated successfully. Sep 9 04:54:06.362639 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 04:54:06.364949 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Sep 9 04:54:06.374849 systemd-logind[1515]: Removed session 4. Sep 9 04:54:06.522414 systemd[1]: Started sshd@4-128.140.114.243:22-147.75.109.163:56990.service - OpenSSH per-connection server daemon (147.75.109.163:56990). Sep 9 04:54:07.529067 sshd[1785]: Accepted publickey for core from 147.75.109.163 port 56990 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:54:07.531292 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:54:07.536995 systemd-logind[1515]: New session 5 of user core. Sep 9 04:54:07.548614 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 04:54:08.062325 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 04:54:08.062605 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 04:54:08.079332 sudo[1789]: pam_unix(sudo:session): session closed for user root Sep 9 04:54:08.239947 sshd[1788]: Connection closed by 147.75.109.163 port 56990 Sep 9 04:54:08.241097 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:08.247151 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Sep 9 04:54:08.247233 systemd[1]: sshd@4-128.140.114.243:22-147.75.109.163:56990.service: Deactivated successfully. 
Sep 9 04:54:08.249513 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 04:54:08.252571 systemd-logind[1515]: Removed session 5. Sep 9 04:54:08.414862 systemd[1]: Started sshd@5-128.140.114.243:22-147.75.109.163:57004.service - OpenSSH per-connection server daemon (147.75.109.163:57004). Sep 9 04:54:09.439553 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 57004 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:54:09.442058 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:54:09.447386 systemd-logind[1515]: New session 6 of user core. Sep 9 04:54:09.455037 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 04:54:09.963063 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 04:54:09.963362 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 04:54:09.969014 sudo[1800]: pam_unix(sudo:session): session closed for user root Sep 9 04:54:09.975250 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 04:54:09.975531 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 04:54:09.987883 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 04:54:10.033568 augenrules[1822]: No rules Sep 9 04:54:10.035249 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 04:54:10.035469 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 04:54:10.039329 sudo[1799]: pam_unix(sudo:session): session closed for user root Sep 9 04:54:10.199794 sshd[1798]: Connection closed by 147.75.109.163 port 57004 Sep 9 04:54:10.199776 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:10.207474 systemd[1]: sshd@5-128.140.114.243:22-147.75.109.163:57004.service: Deactivated successfully. 
Sep 9 04:54:10.209387 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 04:54:10.210352 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Sep 9 04:54:10.212227 systemd-logind[1515]: Removed session 6. Sep 9 04:54:10.375869 systemd[1]: Started sshd@6-128.140.114.243:22-147.75.109.163:54668.service - OpenSSH per-connection server daemon (147.75.109.163:54668). Sep 9 04:54:11.386434 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 54668 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:54:11.388554 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:54:11.394188 systemd-logind[1515]: New session 7 of user core. Sep 9 04:54:11.402498 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 04:54:11.910931 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 04:54:11.911200 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 04:54:12.249724 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 04:54:12.273939 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 04:54:12.515852 dockerd[1852]: time="2025-09-09T04:54:12.515560407Z" level=info msg="Starting up" Sep 9 04:54:12.520017 dockerd[1852]: time="2025-09-09T04:54:12.519969721Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 04:54:12.536612 dockerd[1852]: time="2025-09-09T04:54:12.536501657Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 04:54:12.565493 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2041592640-merged.mount: Deactivated successfully. 
Sep 9 04:54:12.590409 dockerd[1852]: time="2025-09-09T04:54:12.590338338Z" level=info msg="Loading containers: start." Sep 9 04:54:12.602777 kernel: Initializing XFRM netlink socket Sep 9 04:54:12.871906 systemd-networkd[1405]: docker0: Link UP Sep 9 04:54:12.877714 dockerd[1852]: time="2025-09-09T04:54:12.877074252Z" level=info msg="Loading containers: done." Sep 9 04:54:12.893373 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1502029587-merged.mount: Deactivated successfully. Sep 9 04:54:12.896040 dockerd[1852]: time="2025-09-09T04:54:12.895533235Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 04:54:12.896040 dockerd[1852]: time="2025-09-09T04:54:12.895688235Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 04:54:12.896040 dockerd[1852]: time="2025-09-09T04:54:12.895832791Z" level=info msg="Initializing buildkit" Sep 9 04:54:12.930177 dockerd[1852]: time="2025-09-09T04:54:12.930101368Z" level=info msg="Completed buildkit initialization" Sep 9 04:54:12.942254 dockerd[1852]: time="2025-09-09T04:54:12.942196863Z" level=info msg="Daemon has completed initialization" Sep 9 04:54:12.942413 dockerd[1852]: time="2025-09-09T04:54:12.942264240Z" level=info msg="API listen on /run/docker.sock" Sep 9 04:54:12.943809 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 04:54:14.027798 containerd[1539]: time="2025-09-09T04:54:14.027591860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 04:54:14.396866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 04:54:14.398696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
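The dockerd records above use logfmt-style `key="value"` payloads (`time=…`, `level=…`, `msg=…`). A minimal sketch of splitting one such payload into fields — the regex is a naive assumption that handles quoted and bare values only, not escaped quotes:

```python
import re

# A dockerd record from the log above, reduced to its key="value" payload.
record = 'time="2025-09-09T04:54:12.515560407Z" level=info msg="Starting up"'

# Naive logfmt split: each key maps to either a quoted or a bare value.
pairs = re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', record)
fields = {key: quoted or bare for key, quoted, bare in pairs}
print(fields["level"], fields["msg"])  # info Starting up
```

Real logfmt permits escaped quotes inside values, so a production parser should use a proper logfmt library rather than this regex.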
Sep 9 04:54:14.559219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:54:14.569207 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 04:54:14.630800 kubelet[2071]: E0909 04:54:14.629679 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 04:54:14.638638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 04:54:14.639860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 04:54:14.641837 systemd[1]: kubelet.service: Consumed 168ms CPU time, 109M memory peak. Sep 9 04:54:14.659175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836281172.mount: Deactivated successfully. 
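Every kubelet failure above reports the same root cause: `/var/lib/kubelet/config.yaml` does not exist yet, which is expected on a node where `kubeadm` has not written the config. A minimal sketch of extracting the missing path from such a journal line (the sample line is abridged from the log above):

```python
import re

# Abridged kubelet error line from the journal above.
line = ('kubelet[1699]: E0909 04:53:54.116421 1699 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
        'error: open /var/lib/kubelet/config.yaml: no such file or directory"')

# The error message names the config path after "path: "; capture up to the comma.
match = re.search(r'path: (\S+?),', line)
missing = match.group(1) if match else None
print(missing)  # /var/lib/kubelet/config.yaml
```

Once `kubeadm init` or `kubeadm join` writes that file, the restart loop resolves on its own.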
Sep 9 04:54:15.528772 containerd[1539]: time="2025-09-09T04:54:15.528673422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:15.531496 containerd[1539]: time="2025-09-09T04:54:15.531108640Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652533" Sep 9 04:54:15.533307 containerd[1539]: time="2025-09-09T04:54:15.533213785Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:15.538942 containerd[1539]: time="2025-09-09T04:54:15.538884477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:15.539792 containerd[1539]: time="2025-09-09T04:54:15.539734505Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.512096314s" Sep 9 04:54:15.539868 containerd[1539]: time="2025-09-09T04:54:15.539796039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 04:54:15.541287 containerd[1539]: time="2025-09-09T04:54:15.541256641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 04:54:16.684779 containerd[1539]: time="2025-09-09T04:54:16.683698388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:16.685957 containerd[1539]: time="2025-09-09T04:54:16.685906695Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460329" Sep 9 04:54:16.687181 containerd[1539]: time="2025-09-09T04:54:16.687123913Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:16.691096 containerd[1539]: time="2025-09-09T04:54:16.691050144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:16.692197 containerd[1539]: time="2025-09-09T04:54:16.692154737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.150861328s" Sep 9 04:54:16.692329 containerd[1539]: time="2025-09-09T04:54:16.692313371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 04:54:16.693274 containerd[1539]: time="2025-09-09T04:54:16.693231845Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 04:54:17.731815 containerd[1539]: time="2025-09-09T04:54:17.731063579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:17.733338 containerd[1539]: time="2025-09-09T04:54:17.733299473Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125923" Sep 9 04:54:17.735101 containerd[1539]: time="2025-09-09T04:54:17.735067271Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:17.739275 containerd[1539]: time="2025-09-09T04:54:17.739224715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:17.740889 containerd[1539]: time="2025-09-09T04:54:17.740568227Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.047286172s" Sep 9 04:54:17.740889 containerd[1539]: time="2025-09-09T04:54:17.740631160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 04:54:17.741497 containerd[1539]: time="2025-09-09T04:54:17.741424081Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 04:54:18.705038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134510888.mount: Deactivated successfully. 
Sep 9 04:54:19.007855 containerd[1539]: time="2025-09-09T04:54:19.006887438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:19.009921 containerd[1539]: time="2025-09-09T04:54:19.009881278Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916121" Sep 9 04:54:19.010937 containerd[1539]: time="2025-09-09T04:54:19.010898028Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:19.014361 containerd[1539]: time="2025-09-09T04:54:19.014320108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:19.015197 containerd[1539]: time="2025-09-09T04:54:19.014896976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.273419125s" Sep 9 04:54:19.015197 containerd[1539]: time="2025-09-09T04:54:19.015006316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 04:54:19.015756 containerd[1539]: time="2025-09-09T04:54:19.015660879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 04:54:19.666783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128325947.mount: Deactivated successfully. 
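Each containerd pull above reports both bytes read and wall time, so an effective transfer rate can be derived. A rough sketch for the kube-proxy pull, using the figures from the lines above ("bytes read" is the compressed transfer size, so this is an estimate of network throughput, not unpacked image size per second):

```python
# Figures from the containerd records above for kube-proxy:v1.31.12.
bytes_read = 26916121   # "active requests=0, bytes read=26916121"
seconds = 1.273419125   # pull completed "in 1.273419125s"

# Effective transfer rate in MiB/s.
mib_per_s = bytes_read / seconds / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # 20.2 MiB/s
```

The other pulls in this log land in the same 15-25 MiB/s band, which suggests the registry link, not local unpacking, is the bottleneck here.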
Sep 9 04:54:20.346238 containerd[1539]: time="2025-09-09T04:54:20.346185544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:20.348109 containerd[1539]: time="2025-09-09T04:54:20.348070243Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 9 04:54:20.349131 containerd[1539]: time="2025-09-09T04:54:20.349007371Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:20.353770 containerd[1539]: time="2025-09-09T04:54:20.353212967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:20.358003 containerd[1539]: time="2025-09-09T04:54:20.357959541Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.342268496s" Sep 9 04:54:20.358213 containerd[1539]: time="2025-09-09T04:54:20.358157056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 04:54:20.358875 containerd[1539]: time="2025-09-09T04:54:20.358842260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 04:54:20.934002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019160702.mount: Deactivated successfully. 
Sep 9 04:54:20.943906 containerd[1539]: time="2025-09-09T04:54:20.943732122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 04:54:20.945918 containerd[1539]: time="2025-09-09T04:54:20.945853503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 9 04:54:20.947660 containerd[1539]: time="2025-09-09T04:54:20.947602178Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 04:54:20.951768 containerd[1539]: time="2025-09-09T04:54:20.951590855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 04:54:20.952760 containerd[1539]: time="2025-09-09T04:54:20.952681011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 593.671482ms" Sep 9 04:54:20.952760 containerd[1539]: time="2025-09-09T04:54:20.952717178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 04:54:20.953452 containerd[1539]: time="2025-09-09T04:54:20.953419784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 04:54:21.475052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213125681.mount: Deactivated 
successfully. Sep 9 04:54:22.977380 containerd[1539]: time="2025-09-09T04:54:22.977308837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:22.979809 containerd[1539]: time="2025-09-09T04:54:22.979468517Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 9 04:54:22.981760 containerd[1539]: time="2025-09-09T04:54:22.981693288Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:22.985875 containerd[1539]: time="2025-09-09T04:54:22.985820376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:22.986872 containerd[1539]: time="2025-09-09T04:54:22.986608948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.033153077s" Sep 9 04:54:22.986872 containerd[1539]: time="2025-09-09T04:54:22.986781016Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 04:54:24.647319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 9 04:54:24.650992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:24.802678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 04:54:24.813324 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 04:54:24.860660 kubelet[2285]: E0909 04:54:24.860613 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 04:54:24.863400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 04:54:24.863720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 04:54:24.864288 systemd[1]: kubelet.service: Consumed 160ms CPU time, 105M memory peak. Sep 9 04:54:28.622251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:54:28.622395 systemd[1]: kubelet.service: Consumed 160ms CPU time, 105M memory peak. Sep 9 04:54:28.625073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:28.664342 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-7.scope)... Sep 9 04:54:28.664364 systemd[1]: Reloading... Sep 9 04:54:28.788776 zram_generator::config[2343]: No configuration found. Sep 9 04:54:28.996919 systemd[1]: Reloading finished in 332 ms. Sep 9 04:54:29.059034 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 04:54:29.059116 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 04:54:29.059373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:54:29.059423 systemd[1]: kubelet.service: Consumed 106ms CPU time, 95M memory peak. Sep 9 04:54:29.061121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:29.220490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
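The "Scheduled restart job" entries (counters 3 through 6) land almost exactly 10.25 s apart, consistent with a `RestartSec=` near 10 s plus a fraction of a second of start-up latency — the `RestartSec` value itself is an inference, not something the log states. A sketch computing the gaps from the timestamps above:

```python
from datetime import datetime

# Timestamps of the four "kubelet.service: Scheduled restart job" entries above.
stamps = ["04:53:53.897369", "04:54:04.146787", "04:54:14.396866", "04:54:24.647319"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]

# Gap between consecutive restarts, in seconds.
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print([round(g, 2) for g in gaps])  # [10.25, 10.25, 10.25]
```

Note the loop never hits a start limit: each attempt runs for well under the default `StartLimitIntervalSec` accounting window, but systemd keeps rescheduling because the unit is configured to restart on failure.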
Sep 9 04:54:29.230275 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 04:54:29.278810 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:29.279769 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 04:54:29.279769 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:29.279769 kubelet[2391]: I0909 04:54:29.279265 2391 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 04:54:30.289142 kubelet[2391]: I0909 04:54:30.289091 2391 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 04:54:30.289812 kubelet[2391]: I0909 04:54:30.289784 2391 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 04:54:30.290308 kubelet[2391]: I0909 04:54:30.290291 2391 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 04:54:30.324767 kubelet[2391]: E0909 04:54:30.324454 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://128.140.114.243:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.114.243:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:54:30.325752 kubelet[2391]: I0909 
04:54:30.325710 2391 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 04:54:30.337216 kubelet[2391]: I0909 04:54:30.337168 2391 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 04:54:30.341139 kubelet[2391]: I0909 04:54:30.340996 2391 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 04:54:30.342292 kubelet[2391]: I0909 04:54:30.342269 2391 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 04:54:30.342687 kubelet[2391]: I0909 04:54:30.342631 2391 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 04:54:30.343023 kubelet[2391]: I0909 04:54:30.342790 2391 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-n-1f6e10e4b9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan
","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 04:54:30.343270 kubelet[2391]: I0909 04:54:30.343253 2391 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 04:54:30.343329 kubelet[2391]: I0909 04:54:30.343322 2391 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 04:54:30.343642 kubelet[2391]: I0909 04:54:30.343629 2391 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:30.347405 kubelet[2391]: I0909 04:54:30.347369 2391 kubelet.go:408] "Attempting to sync node with API server" Sep 9 04:54:30.347621 kubelet[2391]: I0909 04:54:30.347608 2391 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 04:54:30.347729 kubelet[2391]: I0909 04:54:30.347718 2391 kubelet.go:314] "Adding apiserver pod source" Sep 9 04:54:30.347875 kubelet[2391]: I0909 04:54:30.347866 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 04:54:30.353620 kubelet[2391]: W0909 04:54:30.352948 2391 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://128.140.114.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-n-1f6e10e4b9&limit=500&resourceVersion=0": dial tcp 128.140.114.243:6443: connect: connection refused Sep 9 04:54:30.353620 kubelet[2391]: E0909 04:54:30.353020 2391 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://128.140.114.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-n-1f6e10e4b9&limit=500&resourceVersion=0\": dial tcp 128.140.114.243:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:54:30.353620 kubelet[2391]: W0909 04:54:30.353438 2391 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://128.140.114.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 128.140.114.243:6443: connect: connection refused Sep 9 04:54:30.353620 kubelet[2391]: E0909 04:54:30.353475 2391 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://128.140.114.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.114.243:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:54:30.354099 kubelet[2391]: I0909 04:54:30.354069 2391 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 04:54:30.354898 kubelet[2391]: I0909 04:54:30.354876 2391 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 04:54:30.355071 kubelet[2391]: W0909 04:54:30.355053 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 04:54:30.358102 kubelet[2391]: I0909 04:54:30.358049 2391 server.go:1274] "Started kubelet" Sep 9 04:54:30.366731 kubelet[2391]: I0909 04:54:30.366606 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 04:54:30.367779 kubelet[2391]: E0909 04:54:30.366111 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.114.243:6443/api/v1/namespaces/default/events\": dial tcp 128.140.114.243:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452-0-0-n-1f6e10e4b9.1863843b6c35b408 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452-0-0-n-1f6e10e4b9,UID:ci-4452-0-0-n-1f6e10e4b9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-n-1f6e10e4b9,},FirstTimestamp:2025-09-09 04:54:30.358021128 +0000 UTC m=+1.122677792,LastTimestamp:2025-09-09 04:54:30.358021128 +0000 UTC m=+1.122677792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-n-1f6e10e4b9,}" Sep 9 04:54:30.368767 kubelet[2391]: I0909 04:54:30.368504 2391 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 04:54:30.369437 kubelet[2391]: I0909 04:54:30.369382 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 04:54:30.369948 kubelet[2391]: I0909 04:54:30.369794 2391 server.go:449] "Adding debug handlers to kubelet server" Sep 9 04:54:30.370202 kubelet[2391]: I0909 04:54:30.370182 2391 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 04:54:30.370575 kubelet[2391]: I0909 04:54:30.370550 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" 
Sep 9 04:54:30.370693 kubelet[2391]: I0909 04:54:30.370651 2391 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 04:54:30.371049 kubelet[2391]: E0909 04:54:30.371021 2391 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452-0-0-n-1f6e10e4b9\" not found" Sep 9 04:54:30.373133 kubelet[2391]: E0909 04:54:30.373089 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": dial tcp 128.140.114.243:6443: connect: connection refused" interval="200ms" Sep 9 04:54:30.373255 kubelet[2391]: I0909 04:54:30.373244 2391 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 04:54:30.374809 kubelet[2391]: W0909 04:54:30.374489 2391 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://128.140.114.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 128.140.114.243:6443: connect: connection refused Sep 9 04:54:30.374992 kubelet[2391]: E0909 04:54:30.374965 2391 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://128.140.114.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.114.243:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:54:30.375487 kubelet[2391]: I0909 04:54:30.375460 2391 factory.go:221] Registration of the systemd container factory successfully Sep 9 04:54:30.375670 kubelet[2391]: I0909 04:54:30.375635 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 04:54:30.376711 kubelet[2391]: I0909 04:54:30.376692 2391 reconciler.go:26] "Reconciler: start to 
sync state" Sep 9 04:54:30.377015 kubelet[2391]: E0909 04:54:30.376999 2391 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 04:54:30.377231 kubelet[2391]: I0909 04:54:30.377215 2391 factory.go:221] Registration of the containerd container factory successfully Sep 9 04:54:30.390086 kubelet[2391]: I0909 04:54:30.389976 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 04:54:30.391119 kubelet[2391]: I0909 04:54:30.391064 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 04:54:30.391119 kubelet[2391]: I0909 04:54:30.391100 2391 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 04:54:30.391119 kubelet[2391]: I0909 04:54:30.391123 2391 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 04:54:30.391301 kubelet[2391]: E0909 04:54:30.391166 2391 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:54:30.402406 kubelet[2391]: W0909 04:54:30.402306 2391 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://128.140.114.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 128.140.114.243:6443: connect: connection refused Sep 9 04:54:30.402406 kubelet[2391]: E0909 04:54:30.402385 2391 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://128.140.114.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.114.243:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:54:30.413196 kubelet[2391]: I0909 04:54:30.413168 2391 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 04:54:30.413446 kubelet[2391]: 
I0909 04:54:30.413369 2391 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 04:54:30.413446 kubelet[2391]: I0909 04:54:30.413394 2391 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:30.417025 kubelet[2391]: I0909 04:54:30.416980 2391 policy_none.go:49] "None policy: Start" Sep 9 04:54:30.419026 kubelet[2391]: I0909 04:54:30.418991 2391 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 04:54:30.419156 kubelet[2391]: I0909 04:54:30.419049 2391 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:54:30.428978 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 04:54:30.440605 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 04:54:30.445383 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 04:54:30.455514 kubelet[2391]: I0909 04:54:30.455422 2391 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 04:54:30.456464 kubelet[2391]: I0909 04:54:30.455780 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:54:30.456464 kubelet[2391]: I0909 04:54:30.455814 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:54:30.456464 kubelet[2391]: I0909 04:54:30.456404 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:54:30.460182 kubelet[2391]: E0909 04:54:30.460062 2391 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452-0-0-n-1f6e10e4b9\" not found" Sep 9 04:54:30.506758 systemd[1]: Created slice kubepods-burstable-pod37dcac0bab3381dafeb47cb52e4dca4a.slice - libcontainer container kubepods-burstable-pod37dcac0bab3381dafeb47cb52e4dca4a.slice. 
Sep 9 04:54:30.528783 systemd[1]: Created slice kubepods-burstable-podff73763c03bca984511f1652c8d21391.slice - libcontainer container kubepods-burstable-podff73763c03bca984511f1652c8d21391.slice. Sep 9 04:54:30.533962 systemd[1]: Created slice kubepods-burstable-pod687a54f3106bb66e0d5e6125b138a2bd.slice - libcontainer container kubepods-burstable-pod687a54f3106bb66e0d5e6125b138a2bd.slice. Sep 9 04:54:30.560818 kubelet[2391]: I0909 04:54:30.560632 2391 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.562188 kubelet[2391]: E0909 04:54:30.562107 2391 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://128.140.114.243:6443/api/v1/nodes\": dial tcp 128.140.114.243:6443: connect: connection refused" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.574704 kubelet[2391]: E0909 04:54:30.574533 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": dial tcp 128.140.114.243:6443: connect: connection refused" interval="400ms" Sep 9 04:54:30.580068 kubelet[2391]: I0909 04:54:30.580003 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580068 kubelet[2391]: I0909 04:54:30.580063 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " 
pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580224 kubelet[2391]: I0909 04:54:30.580099 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580224 kubelet[2391]: I0909 04:54:30.580121 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-flexvolume-dir\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580224 kubelet[2391]: I0909 04:54:30.580148 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff73763c03bca984511f1652c8d21391-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"ff73763c03bca984511f1652c8d21391\") " pod="kube-system/kube-scheduler-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580224 kubelet[2391]: I0909 04:54:30.580165 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580224 kubelet[2391]: I0909 04:54:30.580187 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580340 kubelet[2391]: I0909 04:54:30.580219 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.580340 kubelet[2391]: I0909 04:54:30.580242 2391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.765774 kubelet[2391]: I0909 04:54:30.765600 2391 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.766268 kubelet[2391]: E0909 04:54:30.766221 2391 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://128.140.114.243:6443/api/v1/nodes\": dial tcp 128.140.114.243:6443: connect: connection refused" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:30.827549 containerd[1539]: time="2025-09-09T04:54:30.827330407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9,Uid:37dcac0bab3381dafeb47cb52e4dca4a,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:30.833116 containerd[1539]: time="2025-09-09T04:54:30.832921926Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-n-1f6e10e4b9,Uid:ff73763c03bca984511f1652c8d21391,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:30.838278 containerd[1539]: time="2025-09-09T04:54:30.838083870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-n-1f6e10e4b9,Uid:687a54f3106bb66e0d5e6125b138a2bd,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:30.861049 containerd[1539]: time="2025-09-09T04:54:30.861005818Z" level=info msg="connecting to shim 01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c" address="unix:///run/containerd/s/1bd8d63ba1034314f7dab6c6d209c1563ffeb04032fbafd9a7453b90abc0786b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:30.886734 containerd[1539]: time="2025-09-09T04:54:30.886676079Z" level=info msg="connecting to shim 581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2" address="unix:///run/containerd/s/aaeb5e2a6fa05bf5a7f710d4a0c2d6ec3c20fb18cef9fdaf1bd80b966e1a21f6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:30.895118 systemd[1]: Started cri-containerd-01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c.scope - libcontainer container 01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c. Sep 9 04:54:30.905113 containerd[1539]: time="2025-09-09T04:54:30.905016758Z" level=info msg="connecting to shim 1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3" address="unix:///run/containerd/s/bcd92ba5a3d6d42f1e7de0b8f18f38ff36b5672b2e4833696b3d35c37a53512b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:30.934065 systemd[1]: Started cri-containerd-581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2.scope - libcontainer container 581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2. 
Sep 9 04:54:30.939891 systemd[1]: Started cri-containerd-1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3.scope - libcontainer container 1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3. Sep 9 04:54:30.961823 containerd[1539]: time="2025-09-09T04:54:30.961352803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9,Uid:37dcac0bab3381dafeb47cb52e4dca4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c\"" Sep 9 04:54:30.969106 containerd[1539]: time="2025-09-09T04:54:30.969048793Z" level=info msg="CreateContainer within sandbox \"01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 04:54:30.975937 kubelet[2391]: E0909 04:54:30.975646 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": dial tcp 128.140.114.243:6443: connect: connection refused" interval="800ms" Sep 9 04:54:30.985560 containerd[1539]: time="2025-09-09T04:54:30.985446342Z" level=info msg="Container 813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:31.002709 containerd[1539]: time="2025-09-09T04:54:31.002577342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-n-1f6e10e4b9,Uid:ff73763c03bca984511f1652c8d21391,Namespace:kube-system,Attempt:0,} returns sandbox id \"581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2\"" Sep 9 04:54:31.007033 containerd[1539]: time="2025-09-09T04:54:31.006564081Z" level=info msg="CreateContainer within sandbox \"01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\"" Sep 9 04:54:31.007408 containerd[1539]: time="2025-09-09T04:54:31.007286572Z" level=info msg="StartContainer for \"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\"" Sep 9 04:54:31.007482 containerd[1539]: time="2025-09-09T04:54:31.007369062Z" level=info msg="CreateContainer within sandbox \"581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 04:54:31.011257 containerd[1539]: time="2025-09-09T04:54:31.011105169Z" level=info msg="connecting to shim 813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e" address="unix:///run/containerd/s/1bd8d63ba1034314f7dab6c6d209c1563ffeb04032fbafd9a7453b90abc0786b" protocol=ttrpc version=3 Sep 9 04:54:31.016182 containerd[1539]: time="2025-09-09T04:54:31.016105115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-n-1f6e10e4b9,Uid:687a54f3106bb66e0d5e6125b138a2bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3\"" Sep 9 04:54:31.019843 containerd[1539]: time="2025-09-09T04:54:31.019412448Z" level=info msg="CreateContainer within sandbox \"1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 04:54:31.021313 containerd[1539]: time="2025-09-09T04:54:31.021274081Z" level=info msg="Container 466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:31.032121 containerd[1539]: time="2025-09-09T04:54:31.032078913Z" level=info msg="Container dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:31.033669 containerd[1539]: time="2025-09-09T04:54:31.033617825Z" level=info msg="CreateContainer within sandbox 
\"581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\"" Sep 9 04:54:31.035226 systemd[1]: Started cri-containerd-813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e.scope - libcontainer container 813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e. Sep 9 04:54:31.037098 containerd[1539]: time="2025-09-09T04:54:31.034725524Z" level=info msg="StartContainer for \"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\"" Sep 9 04:54:31.037624 containerd[1539]: time="2025-09-09T04:54:31.037583481Z" level=info msg="connecting to shim 466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9" address="unix:///run/containerd/s/aaeb5e2a6fa05bf5a7f710d4a0c2d6ec3c20fb18cef9fdaf1bd80b966e1a21f6" protocol=ttrpc version=3 Sep 9 04:54:31.045512 containerd[1539]: time="2025-09-09T04:54:31.045459506Z" level=info msg="CreateContainer within sandbox \"1c271c1d56d7eaed699e07ed7f1ca5c541145931d830907f6f888c97a1214bd3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81\"" Sep 9 04:54:31.046697 containerd[1539]: time="2025-09-09T04:54:31.046659016Z" level=info msg="StartContainer for \"dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81\"" Sep 9 04:54:31.049940 containerd[1539]: time="2025-09-09T04:54:31.049911143Z" level=info msg="connecting to shim dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81" address="unix:///run/containerd/s/bcd92ba5a3d6d42f1e7de0b8f18f38ff36b5672b2e4833696b3d35c37a53512b" protocol=ttrpc version=3 Sep 9 04:54:31.070082 systemd[1]: Started cri-containerd-466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9.scope - libcontainer container 466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9. 
Sep 9 04:54:31.079983 systemd[1]: Started cri-containerd-dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81.scope - libcontainer container dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81. Sep 9 04:54:31.103592 containerd[1539]: time="2025-09-09T04:54:31.103534730Z" level=info msg="StartContainer for \"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\" returns successfully" Sep 9 04:54:31.166140 containerd[1539]: time="2025-09-09T04:54:31.166016225Z" level=info msg="StartContainer for \"dd7ec135a8a57e6ff13e64cabf9cef82e8a3b8ce1be36e7c20d56562252a2b81\" returns successfully" Sep 9 04:54:31.172015 kubelet[2391]: I0909 04:54:31.171985 2391 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:31.173107 kubelet[2391]: E0909 04:54:31.173071 2391 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://128.140.114.243:6443/api/v1/nodes\": dial tcp 128.140.114.243:6443: connect: connection refused" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:31.173645 containerd[1539]: time="2025-09-09T04:54:31.173610375Z" level=info msg="StartContainer for \"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\" returns successfully" Sep 9 04:54:31.975586 kubelet[2391]: I0909 04:54:31.974986 2391 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:33.720142 kubelet[2391]: E0909 04:54:33.720088 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452-0-0-n-1f6e10e4b9\" not found" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:33.771326 kubelet[2391]: E0909 04:54:33.771180 2391 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4452-0-0-n-1f6e10e4b9.1863843b6c35b408 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452-0-0-n-1f6e10e4b9,UID:ci-4452-0-0-n-1f6e10e4b9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-n-1f6e10e4b9,},FirstTimestamp:2025-09-09 04:54:30.358021128 +0000 UTC m=+1.122677792,LastTimestamp:2025-09-09 04:54:30.358021128 +0000 UTC m=+1.122677792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-n-1f6e10e4b9,}" Sep 9 04:54:33.833979 kubelet[2391]: I0909 04:54:33.833927 2391 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:34.354777 kubelet[2391]: I0909 04:54:34.354672 2391 apiserver.go:52] "Watching apiserver" Sep 9 04:54:34.374407 kubelet[2391]: I0909 04:54:34.374336 2391 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 04:54:35.686669 systemd[1]: Reload requested from client PID 2663 ('systemctl') (unit session-7.scope)... Sep 9 04:54:35.686699 systemd[1]: Reloading... Sep 9 04:54:35.798776 zram_generator::config[2707]: No configuration found. Sep 9 04:54:35.999269 systemd[1]: Reloading finished in 312 ms. Sep 9 04:54:36.024965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:36.038829 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 04:54:36.039142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:54:36.039231 systemd[1]: kubelet.service: Consumed 1.576s CPU time, 125.8M memory peak. Sep 9 04:54:36.043192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:36.200344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 04:54:36.212144 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 04:54:36.274045 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:36.274045 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 04:54:36.274045 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:36.276852 kubelet[2752]: I0909 04:54:36.274428 2752 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 04:54:36.281086 kubelet[2752]: I0909 04:54:36.281040 2752 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 04:54:36.281086 kubelet[2752]: I0909 04:54:36.281065 2752 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 04:54:36.281777 kubelet[2752]: I0909 04:54:36.281313 2752 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 04:54:36.283539 kubelet[2752]: I0909 04:54:36.283497 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 04:54:36.286243 kubelet[2752]: I0909 04:54:36.285798 2752 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 04:54:36.297298 kubelet[2752]: I0909 04:54:36.297221 2752 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 04:54:36.299759 kubelet[2752]: I0909 04:54:36.299716 2752 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 04:54:36.299886 kubelet[2752]: I0909 04:54:36.299852 2752 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 04:54:36.299969 kubelet[2752]: I0909 04:54:36.299944 2752 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 04:54:36.300171 kubelet[2752]: I0909 04:54:36.299969 2752 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-n-1f6e10e4b9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal
":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 04:54:36.300260 kubelet[2752]: I0909 04:54:36.300177 2752 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 04:54:36.300260 kubelet[2752]: I0909 04:54:36.300186 2752 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 04:54:36.300260 kubelet[2752]: I0909 04:54:36.300225 2752 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:36.300344 kubelet[2752]: I0909 04:54:36.300326 2752 kubelet.go:408] "Attempting to sync node with API server" Sep 9 04:54:36.300389 kubelet[2752]: I0909 04:54:36.300369 2752 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 04:54:36.300424 kubelet[2752]: I0909 04:54:36.300399 2752 kubelet.go:314] "Adding apiserver pod source" Sep 9 04:54:36.300424 kubelet[2752]: I0909 04:54:36.300415 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 04:54:36.302316 kubelet[2752]: I0909 04:54:36.302208 2752 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 04:54:36.302956 kubelet[2752]: I0909 04:54:36.302708 2752 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 04:54:36.303236 kubelet[2752]: I0909 04:54:36.303136 2752 server.go:1274] "Started kubelet" Sep 9 04:54:36.306438 kubelet[2752]: I0909 04:54:36.304993 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 
04:54:36.311183 kubelet[2752]: I0909 04:54:36.309264 2752 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 04:54:36.311183 kubelet[2752]: I0909 04:54:36.310100 2752 server.go:449] "Adding debug handlers to kubelet server" Sep 9 04:54:36.311298 kubelet[2752]: I0909 04:54:36.311172 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 04:54:36.311761 kubelet[2752]: I0909 04:54:36.311370 2752 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 04:54:36.311761 kubelet[2752]: I0909 04:54:36.311562 2752 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 04:54:36.313601 kubelet[2752]: I0909 04:54:36.313239 2752 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 04:54:36.313601 kubelet[2752]: E0909 04:54:36.313490 2752 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452-0-0-n-1f6e10e4b9\" not found" Sep 9 04:54:36.317984 kubelet[2752]: I0909 04:54:36.315250 2752 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 04:54:36.317984 kubelet[2752]: I0909 04:54:36.315418 2752 reconciler.go:26] "Reconciler: start to sync state" Sep 9 04:54:36.317984 kubelet[2752]: I0909 04:54:36.317519 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 04:54:36.318561 kubelet[2752]: I0909 04:54:36.318525 2752 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 04:54:36.318561 kubelet[2752]: I0909 04:54:36.318551 2752 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 04:54:36.318643 kubelet[2752]: I0909 04:54:36.318567 2752 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 04:54:36.318643 kubelet[2752]: E0909 04:54:36.318607 2752 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:54:36.328773 kubelet[2752]: I0909 04:54:36.327596 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 04:54:36.333466 kubelet[2752]: E0909 04:54:36.333282 2752 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 04:54:36.334372 kubelet[2752]: I0909 04:54:36.334183 2752 factory.go:221] Registration of the containerd container factory successfully Sep 9 04:54:36.334372 kubelet[2752]: I0909 04:54:36.334215 2752 factory.go:221] Registration of the systemd container factory successfully Sep 9 04:54:36.404138 kubelet[2752]: I0909 04:54:36.404077 2752 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 04:54:36.404138 kubelet[2752]: I0909 04:54:36.404114 2752 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 04:54:36.404138 kubelet[2752]: I0909 04:54:36.404147 2752 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:36.404413 kubelet[2752]: I0909 04:54:36.404298 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 04:54:36.404413 kubelet[2752]: I0909 04:54:36.404309 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 04:54:36.404413 kubelet[2752]: I0909 04:54:36.404327 2752 policy_none.go:49] "None policy: Start" Sep 9 04:54:36.405292 kubelet[2752]: I0909 
04:54:36.405054 2752 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 04:54:36.405292 kubelet[2752]: I0909 04:54:36.405087 2752 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:54:36.405292 kubelet[2752]: I0909 04:54:36.405231 2752 state_mem.go:75] "Updated machine memory state" Sep 9 04:54:36.410146 kubelet[2752]: I0909 04:54:36.410097 2752 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 04:54:36.410281 kubelet[2752]: I0909 04:54:36.410265 2752 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:54:36.410312 kubelet[2752]: I0909 04:54:36.410281 2752 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:54:36.412031 kubelet[2752]: I0909 04:54:36.412004 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:54:36.433661 kubelet[2752]: E0909 04:54:36.433613 2752 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4452-0-0-n-1f6e10e4b9\" already exists" pod="kube-system/kube-scheduler-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518381 kubelet[2752]: I0909 04:54:36.518008 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518381 kubelet[2752]: I0909 04:54:36.518095 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " 
pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518381 kubelet[2752]: I0909 04:54:36.518154 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518381 kubelet[2752]: I0909 04:54:36.518183 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff73763c03bca984511f1652c8d21391-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"ff73763c03bca984511f1652c8d21391\") " pod="kube-system/kube-scheduler-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518381 kubelet[2752]: I0909 04:54:36.518230 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/687a54f3106bb66e0d5e6125b138a2bd-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"687a54f3106bb66e0d5e6125b138a2bd\") " pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518677 kubelet[2752]: I0909 04:54:36.518259 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518677 kubelet[2752]: I0909 04:54:36.518284 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-flexvolume-dir\") 
pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518677 kubelet[2752]: I0909 04:54:36.518309 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518677 kubelet[2752]: I0909 04:54:36.518332 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37dcac0bab3381dafeb47cb52e4dca4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9\" (UID: \"37dcac0bab3381dafeb47cb52e4dca4a\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.518948 kubelet[2752]: I0909 04:54:36.518724 2752 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.531279 kubelet[2752]: I0909 04:54:36.531165 2752 kubelet_node_status.go:111] "Node was previously registered" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:36.533789 kubelet[2752]: I0909 04:54:36.531262 2752 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:54:37.301203 kubelet[2752]: I0909 04:54:37.301126 2752 apiserver.go:52] "Watching apiserver" Sep 9 04:54:37.316589 kubelet[2752]: I0909 04:54:37.316518 2752 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 04:54:37.446952 kubelet[2752]: I0909 04:54:37.446814 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452-0-0-n-1f6e10e4b9" 
podStartSLOduration=2.446794021 podStartE2EDuration="2.446794021s" podCreationTimestamp="2025-09-09 04:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:37.426863027 +0000 UTC m=+1.210293204" watchObservedRunningTime="2025-09-09 04:54:37.446794021 +0000 UTC m=+1.230224158" Sep 9 04:54:37.465167 kubelet[2752]: I0909 04:54:37.464983 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452-0-0-n-1f6e10e4b9" podStartSLOduration=1.464963385 podStartE2EDuration="1.464963385s" podCreationTimestamp="2025-09-09 04:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:37.449065867 +0000 UTC m=+1.232496044" watchObservedRunningTime="2025-09-09 04:54:37.464963385 +0000 UTC m=+1.248393522" Sep 9 04:54:41.091191 kubelet[2752]: I0909 04:54:41.090881 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" podStartSLOduration=5.090861697 podStartE2EDuration="5.090861697s" podCreationTimestamp="2025-09-09 04:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:37.465883684 +0000 UTC m=+1.249313821" watchObservedRunningTime="2025-09-09 04:54:41.090861697 +0000 UTC m=+4.874291874" Sep 9 04:54:42.090825 kubelet[2752]: I0909 04:54:42.090555 2752 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 04:54:42.091263 containerd[1539]: time="2025-09-09T04:54:42.091080896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 04:54:42.092067 kubelet[2752]: I0909 04:54:42.091771 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 04:54:42.783208 systemd[1]: Created slice kubepods-besteffort-podc095c86c_bc60_4342_8aaf_116e623749ec.slice - libcontainer container kubepods-besteffort-podc095c86c_bc60_4342_8aaf_116e623749ec.slice. Sep 9 04:54:42.787793 kubelet[2752]: W0909 04:54:42.787254 2752 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc095c86c_bc60_4342_8aaf_116e623749ec.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc095c86c_bc60_4342_8aaf_116e623749ec.slice/cpuset.cpus.effective: no such device Sep 9 04:54:42.861089 kubelet[2752]: I0909 04:54:42.860987 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c095c86c-bc60-4342-8aaf-116e623749ec-kube-proxy\") pod \"kube-proxy-5g8v9\" (UID: \"c095c86c-bc60-4342-8aaf-116e623749ec\") " pod="kube-system/kube-proxy-5g8v9" Sep 9 04:54:42.861520 kubelet[2752]: I0909 04:54:42.861497 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c095c86c-bc60-4342-8aaf-116e623749ec-xtables-lock\") pod \"kube-proxy-5g8v9\" (UID: \"c095c86c-bc60-4342-8aaf-116e623749ec\") " pod="kube-system/kube-proxy-5g8v9" Sep 9 04:54:42.861845 kubelet[2752]: I0909 04:54:42.861803 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c095c86c-bc60-4342-8aaf-116e623749ec-lib-modules\") pod \"kube-proxy-5g8v9\" (UID: \"c095c86c-bc60-4342-8aaf-116e623749ec\") " pod="kube-system/kube-proxy-5g8v9" Sep 9 04:54:42.862081 kubelet[2752]: I0909 04:54:42.861922 2752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-678jr\" (UniqueName: \"kubernetes.io/projected/c095c86c-bc60-4342-8aaf-116e623749ec-kube-api-access-678jr\") pod \"kube-proxy-5g8v9\" (UID: \"c095c86c-bc60-4342-8aaf-116e623749ec\") " pod="kube-system/kube-proxy-5g8v9" Sep 9 04:54:43.098804 containerd[1539]: time="2025-09-09T04:54:43.098646717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g8v9,Uid:c095c86c-bc60-4342-8aaf-116e623749ec,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:43.130513 containerd[1539]: time="2025-09-09T04:54:43.130380020Z" level=info msg="connecting to shim 63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7" address="unix:///run/containerd/s/663e93a43a6cec212ed67c1d301d85e0c63e4edb0d034f062f11b52f93b50a66" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:43.174014 systemd[1]: Started cri-containerd-63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7.scope - libcontainer container 63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7. Sep 9 04:54:43.192930 systemd[1]: Created slice kubepods-besteffort-pod2cebccc6_7f81_4211_bc76_13c25e966f00.slice - libcontainer container kubepods-besteffort-pod2cebccc6_7f81_4211_bc76_13c25e966f00.slice. 
Sep 9 04:54:43.221923 containerd[1539]: time="2025-09-09T04:54:43.221798485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g8v9,Uid:c095c86c-bc60-4342-8aaf-116e623749ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7\"" Sep 9 04:54:43.226363 containerd[1539]: time="2025-09-09T04:54:43.226316441Z" level=info msg="CreateContainer within sandbox \"63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 04:54:43.241806 containerd[1539]: time="2025-09-09T04:54:43.240130094Z" level=info msg="Container b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:43.251856 containerd[1539]: time="2025-09-09T04:54:43.250408966Z" level=info msg="CreateContainer within sandbox \"63d676c46c19e1bc1686ec8f09396090498a41fe9c9cfbde2e1d530c37a1cca7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd\"" Sep 9 04:54:43.252738 containerd[1539]: time="2025-09-09T04:54:43.252392878Z" level=info msg="StartContainer for \"b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd\"" Sep 9 04:54:43.256242 containerd[1539]: time="2025-09-09T04:54:43.256217127Z" level=info msg="connecting to shim b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd" address="unix:///run/containerd/s/663e93a43a6cec212ed67c1d301d85e0c63e4edb0d034f062f11b52f93b50a66" protocol=ttrpc version=3 Sep 9 04:54:43.264077 kubelet[2752]: I0909 04:54:43.263919 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfqw\" (UniqueName: \"kubernetes.io/projected/2cebccc6-7f81-4211-bc76-13c25e966f00-kube-api-access-tnfqw\") pod \"tigera-operator-58fc44c59b-sjl9f\" (UID: \"2cebccc6-7f81-4211-bc76-13c25e966f00\") " 
pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" Sep 9 04:54:43.264077 kubelet[2752]: I0909 04:54:43.263968 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2cebccc6-7f81-4211-bc76-13c25e966f00-var-lib-calico\") pod \"tigera-operator-58fc44c59b-sjl9f\" (UID: \"2cebccc6-7f81-4211-bc76-13c25e966f00\") " pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" Sep 9 04:54:43.277963 systemd[1]: Started cri-containerd-b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd.scope - libcontainer container b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd. Sep 9 04:54:43.317093 containerd[1539]: time="2025-09-09T04:54:43.316980633Z" level=info msg="StartContainer for \"b404aaf3906d9c9e7d44e7ad0d70a1efd71c4124983e226c8a4236ac93f83bdd\" returns successfully" Sep 9 04:54:43.421625 kubelet[2752]: I0909 04:54:43.421541 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5g8v9" podStartSLOduration=1.421430315 podStartE2EDuration="1.421430315s" podCreationTimestamp="2025-09-09 04:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:43.421135047 +0000 UTC m=+7.204565184" watchObservedRunningTime="2025-09-09 04:54:43.421430315 +0000 UTC m=+7.204860412" Sep 9 04:54:43.497559 containerd[1539]: time="2025-09-09T04:54:43.497467975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-sjl9f,Uid:2cebccc6-7f81-4211-bc76-13c25e966f00,Namespace:tigera-operator,Attempt:0,}" Sep 9 04:54:43.529035 containerd[1539]: time="2025-09-09T04:54:43.528943173Z" level=info msg="connecting to shim f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f" address="unix:///run/containerd/s/0b6be4896e6509e395089e41c9f5415c1c5a093f97f827117f3a71f54e84bbd7" namespace=k8s.io 
protocol=ttrpc version=3 Sep 9 04:54:43.556148 systemd[1]: Started cri-containerd-f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f.scope - libcontainer container f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f. Sep 9 04:54:43.599265 containerd[1539]: time="2025-09-09T04:54:43.599221997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-sjl9f,Uid:2cebccc6-7f81-4211-bc76-13c25e966f00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\"" Sep 9 04:54:43.602032 containerd[1539]: time="2025-09-09T04:54:43.601997785Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 04:54:46.700178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628370698.mount: Deactivated successfully. Sep 9 04:54:50.439895 containerd[1539]: time="2025-09-09T04:54:50.439649159Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:50.441361 containerd[1539]: time="2025-09-09T04:54:50.441131929Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 04:54:50.442588 containerd[1539]: time="2025-09-09T04:54:50.442529251Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:50.447839 containerd[1539]: time="2025-09-09T04:54:50.447484286Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:50.448283 containerd[1539]: time="2025-09-09T04:54:50.448254313Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 6.846005543s" Sep 9 04:54:50.448372 containerd[1539]: time="2025-09-09T04:54:50.448356002Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 04:54:50.451945 containerd[1539]: time="2025-09-09T04:54:50.451866350Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 04:54:50.462791 containerd[1539]: time="2025-09-09T04:54:50.462447397Z" level=info msg="Container 97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:50.467704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269837599.mount: Deactivated successfully. 
Sep 9 04:54:50.475702 containerd[1539]: time="2025-09-09T04:54:50.475532304Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\"" Sep 9 04:54:50.477282 containerd[1539]: time="2025-09-09T04:54:50.477213332Z" level=info msg="StartContainer for \"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\"" Sep 9 04:54:50.480153 containerd[1539]: time="2025-09-09T04:54:50.480033459Z" level=info msg="connecting to shim 97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a" address="unix:///run/containerd/s/0b6be4896e6509e395089e41c9f5415c1c5a093f97f827117f3a71f54e84bbd7" protocol=ttrpc version=3 Sep 9 04:54:50.506034 systemd[1]: Started cri-containerd-97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a.scope - libcontainer container 97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a. 
Sep 9 04:54:50.551700 containerd[1539]: time="2025-09-09T04:54:50.551470801Z" level=info msg="StartContainer for \"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" returns successfully" Sep 9 04:54:51.442789 kubelet[2752]: I0909 04:54:51.442436 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" podStartSLOduration=1.593074536 podStartE2EDuration="8.442411028s" podCreationTimestamp="2025-09-09 04:54:43 +0000 UTC" firstStartedPulling="2025-09-09 04:54:43.600442795 +0000 UTC m=+7.383872932" lastFinishedPulling="2025-09-09 04:54:50.449779287 +0000 UTC m=+14.233209424" observedRunningTime="2025-09-09 04:54:51.442073199 +0000 UTC m=+15.225503376" watchObservedRunningTime="2025-09-09 04:54:51.442411028 +0000 UTC m=+15.225841205" Sep 9 04:54:56.788426 sudo[1835]: pam_unix(sudo:session): session closed for user root Sep 9 04:54:56.948909 sshd[1834]: Connection closed by 147.75.109.163 port 54668 Sep 9 04:54:56.948308 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:56.954813 systemd[1]: sshd@6-128.140.114.243:22-147.75.109.163:54668.service: Deactivated successfully. Sep 9 04:54:56.959388 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 04:54:56.963320 systemd[1]: session-7.scope: Consumed 7.497s CPU time, 216.1M memory peak. Sep 9 04:54:56.968348 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Sep 9 04:54:56.972423 systemd-logind[1515]: Removed session 7. Sep 9 04:55:05.254011 systemd[1]: Created slice kubepods-besteffort-pod8cedbc3b_5db4_408d_814e_a8c1eec562d7.slice - libcontainer container kubepods-besteffort-pod8cedbc3b_5db4_408d_814e_a8c1eec562d7.slice. Sep 9 04:55:05.387289 systemd[1]: Created slice kubepods-besteffort-pod34c262a1_83cf_4e18_98cc_c8837a4024b9.slice - libcontainer container kubepods-besteffort-pod34c262a1_83cf_4e18_98cc_c8837a4024b9.slice. 
Sep 9 04:55:05.414730 kubelet[2752]: I0909 04:55:05.414668 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-cni-bin-dir\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.414730 kubelet[2752]: I0909 04:55:05.414722 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-policysync\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.414730 kubelet[2752]: I0909 04:55:05.414752 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-var-lib-calico\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.414730 kubelet[2752]: I0909 04:55:05.414772 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8cedbc3b-5db4-408d-814e-a8c1eec562d7-typha-certs\") pod \"calico-typha-6cf56cbcfb-m8c8z\" (UID: \"8cedbc3b-5db4-408d-814e-a8c1eec562d7\") " pod="calico-system/calico-typha-6cf56cbcfb-m8c8z" Sep 9 04:55:05.415737 kubelet[2752]: I0909 04:55:05.414987 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-cni-log-dir\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.415737 kubelet[2752]: I0909 04:55:05.415023 2752 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34c262a1-83cf-4e18-98cc-c8837a4024b9-node-certs\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.415737 kubelet[2752]: I0909 04:55:05.415045 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-xtables-lock\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.415737 kubelet[2752]: I0909 04:55:05.415061 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-var-run-calico\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.415737 kubelet[2752]: I0909 04:55:05.415079 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cedbc3b-5db4-408d-814e-a8c1eec562d7-tigera-ca-bundle\") pod \"calico-typha-6cf56cbcfb-m8c8z\" (UID: \"8cedbc3b-5db4-408d-814e-a8c1eec562d7\") " pod="calico-system/calico-typha-6cf56cbcfb-m8c8z" Sep 9 04:55:05.415918 kubelet[2752]: I0909 04:55:05.415106 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqpj\" (UniqueName: \"kubernetes.io/projected/8cedbc3b-5db4-408d-814e-a8c1eec562d7-kube-api-access-pnqpj\") pod \"calico-typha-6cf56cbcfb-m8c8z\" (UID: \"8cedbc3b-5db4-408d-814e-a8c1eec562d7\") " pod="calico-system/calico-typha-6cf56cbcfb-m8c8z" Sep 9 04:55:05.415918 kubelet[2752]: I0909 04:55:05.415122 2752 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-flexvol-driver-host\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.416991 kubelet[2752]: I0909 04:55:05.415237 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-lib-modules\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.416991 kubelet[2752]: I0909 04:55:05.416877 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv65r\" (UniqueName: \"kubernetes.io/projected/34c262a1-83cf-4e18-98cc-c8837a4024b9-kube-api-access-rv65r\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.416991 kubelet[2752]: I0909 04:55:05.416908 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34c262a1-83cf-4e18-98cc-c8837a4024b9-cni-net-dir\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.416991 kubelet[2752]: I0909 04:55:05.416937 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34c262a1-83cf-4e18-98cc-c8837a4024b9-tigera-ca-bundle\") pod \"calico-node-bscrp\" (UID: \"34c262a1-83cf-4e18-98cc-c8837a4024b9\") " pod="calico-system/calico-node-bscrp" Sep 9 04:55:05.508398 kubelet[2752]: E0909 04:55:05.507600 2752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:05.517987 kubelet[2752]: I0909 04:55:05.517922 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcz4q\" (UniqueName: \"kubernetes.io/projected/e660f608-9e83-4b76-ae9d-6598e92ef788-kube-api-access-wcz4q\") pod \"csi-node-driver-6brn6\" (UID: \"e660f608-9e83-4b76-ae9d-6598e92ef788\") " pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:05.518189 kubelet[2752]: I0909 04:55:05.518018 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e660f608-9e83-4b76-ae9d-6598e92ef788-registration-dir\") pod \"csi-node-driver-6brn6\" (UID: \"e660f608-9e83-4b76-ae9d-6598e92ef788\") " pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:05.518189 kubelet[2752]: I0909 04:55:05.518041 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e660f608-9e83-4b76-ae9d-6598e92ef788-kubelet-dir\") pod \"csi-node-driver-6brn6\" (UID: \"e660f608-9e83-4b76-ae9d-6598e92ef788\") " pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:05.518189 kubelet[2752]: I0909 04:55:05.518059 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e660f608-9e83-4b76-ae9d-6598e92ef788-socket-dir\") pod \"csi-node-driver-6brn6\" (UID: \"e660f608-9e83-4b76-ae9d-6598e92ef788\") " pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:05.518189 kubelet[2752]: I0909 04:55:05.518091 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e660f608-9e83-4b76-ae9d-6598e92ef788-varrun\") pod \"csi-node-driver-6brn6\" (UID: \"e660f608-9e83-4b76-ae9d-6598e92ef788\") " pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:05.520209 kubelet[2752]: E0909 04:55:05.520165 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.520209 kubelet[2752]: W0909 04:55:05.520202 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.520397 kubelet[2752]: E0909 04:55:05.520230 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.523319 kubelet[2752]: E0909 04:55:05.523280 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.523319 kubelet[2752]: W0909 04:55:05.523308 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.523319 kubelet[2752]: E0909 04:55:05.523331 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.523532 kubelet[2752]: E0909 04:55:05.523510 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.523532 kubelet[2752]: W0909 04:55:05.523525 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.523532 kubelet[2752]: E0909 04:55:05.523539 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.523702 kubelet[2752]: E0909 04:55:05.523686 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.523702 kubelet[2752]: W0909 04:55:05.523699 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.523798 kubelet[2752]: E0909 04:55:05.523718 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.523892 kubelet[2752]: E0909 04:55:05.523870 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.523892 kubelet[2752]: W0909 04:55:05.523885 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.523991 kubelet[2752]: E0909 04:55:05.523961 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.524264 kubelet[2752]: E0909 04:55:05.524234 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.524264 kubelet[2752]: W0909 04:55:05.524253 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.524357 kubelet[2752]: E0909 04:55:05.524334 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.524888 kubelet[2752]: E0909 04:55:05.524848 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.524888 kubelet[2752]: W0909 04:55:05.524872 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.525042 kubelet[2752]: E0909 04:55:05.524935 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.525110 kubelet[2752]: E0909 04:55:05.525087 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.525110 kubelet[2752]: W0909 04:55:05.525103 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.525188 kubelet[2752]: E0909 04:55:05.525142 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.525279 kubelet[2752]: E0909 04:55:05.525243 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.525334 kubelet[2752]: W0909 04:55:05.525280 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.526397 kubelet[2752]: E0909 04:55:05.525383 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.526614 kubelet[2752]: E0909 04:55:05.526586 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.526614 kubelet[2752]: W0909 04:55:05.526610 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.526721 kubelet[2752]: E0909 04:55:05.526687 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.528569 kubelet[2752]: E0909 04:55:05.528542 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.528569 kubelet[2752]: W0909 04:55:05.528564 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.528736 kubelet[2752]: E0909 04:55:05.528706 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.529020 kubelet[2752]: E0909 04:55:05.528986 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.529020 kubelet[2752]: W0909 04:55:05.529013 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.529122 kubelet[2752]: E0909 04:55:05.529097 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.529726 kubelet[2752]: E0909 04:55:05.529688 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.529726 kubelet[2752]: W0909 04:55:05.529723 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.531008 kubelet[2752]: E0909 04:55:05.530954 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.531181 kubelet[2752]: E0909 04:55:05.531153 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.531181 kubelet[2752]: W0909 04:55:05.531174 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.531306 kubelet[2752]: E0909 04:55:05.531263 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.531405 kubelet[2752]: E0909 04:55:05.531383 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.531405 kubelet[2752]: W0909 04:55:05.531398 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.531514 kubelet[2752]: E0909 04:55:05.531481 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.531597 kubelet[2752]: E0909 04:55:05.531579 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.531597 kubelet[2752]: W0909 04:55:05.531594 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.531735 kubelet[2752]: E0909 04:55:05.531712 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.531875 kubelet[2752]: E0909 04:55:05.531812 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.531875 kubelet[2752]: W0909 04:55:05.531819 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.531921 kubelet[2752]: E0909 04:55:05.531896 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.531993 kubelet[2752]: E0909 04:55:05.531975 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.531993 kubelet[2752]: W0909 04:55:05.531989 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.532087 kubelet[2752]: E0909 04:55:05.532070 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.532321 kubelet[2752]: E0909 04:55:05.532301 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.532321 kubelet[2752]: W0909 04:55:05.532318 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.532423 kubelet[2752]: E0909 04:55:05.532404 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.534688 kubelet[2752]: E0909 04:55:05.534659 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.534688 kubelet[2752]: W0909 04:55:05.534678 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.535948 kubelet[2752]: E0909 04:55:05.535912 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.536117 kubelet[2752]: E0909 04:55:05.536093 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.536117 kubelet[2752]: W0909 04:55:05.536109 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.536207 kubelet[2752]: E0909 04:55:05.536190 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.536312 kubelet[2752]: E0909 04:55:05.536296 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.536312 kubelet[2752]: W0909 04:55:05.536309 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.536459 kubelet[2752]: E0909 04:55:05.536399 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.536520 kubelet[2752]: E0909 04:55:05.536504 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.536520 kubelet[2752]: W0909 04:55:05.536518 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.536579 kubelet[2752]: E0909 04:55:05.536553 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.536750 kubelet[2752]: E0909 04:55:05.536684 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.536750 kubelet[2752]: W0909 04:55:05.536697 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.536817 kubelet[2752]: E0909 04:55:05.536778 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.536920 kubelet[2752]: E0909 04:55:05.536901 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.536920 kubelet[2752]: W0909 04:55:05.536917 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.537190 kubelet[2752]: E0909 04:55:05.537100 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.539793 kubelet[2752]: E0909 04:55:05.538649 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.539793 kubelet[2752]: W0909 04:55:05.538672 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.539793 kubelet[2752]: E0909 04:55:05.538688 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.540419 kubelet[2752]: E0909 04:55:05.540387 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.540419 kubelet[2752]: W0909 04:55:05.540409 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.540491 kubelet[2752]: E0909 04:55:05.540424 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.540912 kubelet[2752]: E0909 04:55:05.540885 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.540912 kubelet[2752]: W0909 04:55:05.540907 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.541007 kubelet[2752]: E0909 04:55:05.540921 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.544311 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.549775 kubelet[2752]: W0909 04:55:05.544337 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.544358 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.547432 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.549775 kubelet[2752]: W0909 04:55:05.547449 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.547471 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.547691 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.549775 kubelet[2752]: W0909 04:55:05.547699 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.549775 kubelet[2752]: E0909 04:55:05.547708 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.555892 kubelet[2752]: E0909 04:55:05.555855 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.555892 kubelet[2752]: W0909 04:55:05.555883 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.556058 kubelet[2752]: E0909 04:55:05.555905 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.571888 kubelet[2752]: E0909 04:55:05.569867 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.571888 kubelet[2752]: W0909 04:55:05.569899 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.571888 kubelet[2752]: E0909 04:55:05.569919 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.578859 kubelet[2752]: E0909 04:55:05.578544 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.578859 kubelet[2752]: W0909 04:55:05.578576 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.578859 kubelet[2752]: E0909 04:55:05.578597 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.619323 kubelet[2752]: E0909 04:55:05.619269 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.619323 kubelet[2752]: W0909 04:55:05.619301 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.619323 kubelet[2752]: E0909 04:55:05.619322 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.619815 kubelet[2752]: E0909 04:55:05.619788 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.619815 kubelet[2752]: W0909 04:55:05.619808 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.619909 kubelet[2752]: E0909 04:55:05.619830 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.620106 kubelet[2752]: E0909 04:55:05.620033 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.620106 kubelet[2752]: W0909 04:55:05.620093 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.620175 kubelet[2752]: E0909 04:55:05.620116 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.620645 kubelet[2752]: E0909 04:55:05.620595 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.620645 kubelet[2752]: W0909 04:55:05.620619 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.620852 kubelet[2752]: E0909 04:55:05.620656 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.620879 kubelet[2752]: E0909 04:55:05.620869 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.620902 kubelet[2752]: W0909 04:55:05.620878 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.621311 kubelet[2752]: E0909 04:55:05.620964 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.621311 kubelet[2752]: E0909 04:55:05.621118 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.621311 kubelet[2752]: W0909 04:55:05.621126 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.621792 kubelet[2752]: E0909 04:55:05.621759 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.621860 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.622841 kubelet[2752]: W0909 04:55:05.621869 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.621950 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.622042 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.622841 kubelet[2752]: W0909 04:55:05.622048 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.622127 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.622209 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.622841 kubelet[2752]: W0909 04:55:05.622237 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.622344 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.622841 kubelet[2752]: E0909 04:55:05.622386 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.623062 kubelet[2752]: W0909 04:55:05.622392 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.623062 kubelet[2752]: E0909 04:55:05.622578 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.623062 kubelet[2752]: E0909 04:55:05.622788 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.623062 kubelet[2752]: W0909 04:55:05.622800 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.623062 kubelet[2752]: E0909 04:55:05.622816 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.623966 kubelet[2752]: E0909 04:55:05.623940 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.623966 kubelet[2752]: W0909 04:55:05.623959 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624080 kubelet[2752]: E0909 04:55:05.624057 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.624137 kubelet[2752]: E0909 04:55:05.624118 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.624137 kubelet[2752]: W0909 04:55:05.624131 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624230 kubelet[2752]: E0909 04:55:05.624211 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.624305 kubelet[2752]: E0909 04:55:05.624291 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.624305 kubelet[2752]: W0909 04:55:05.624303 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624393 kubelet[2752]: E0909 04:55:05.624377 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.624463 kubelet[2752]: E0909 04:55:05.624447 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.624463 kubelet[2752]: W0909 04:55:05.624456 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624603 kubelet[2752]: E0909 04:55:05.624529 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.624603 kubelet[2752]: E0909 04:55:05.624602 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.624690 kubelet[2752]: W0909 04:55:05.624608 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624690 kubelet[2752]: E0909 04:55:05.624682 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.624930 kubelet[2752]: E0909 04:55:05.624820 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.624930 kubelet[2752]: W0909 04:55:05.624834 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.624930 kubelet[2752]: E0909 04:55:05.624847 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.625249 kubelet[2752]: E0909 04:55:05.625197 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.625249 kubelet[2752]: W0909 04:55:05.625218 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.625440 kubelet[2752]: E0909 04:55:05.625253 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.625845 kubelet[2752]: E0909 04:55:05.625816 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.625845 kubelet[2752]: W0909 04:55:05.625834 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.625933 kubelet[2752]: E0909 04:55:05.625855 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.627046 kubelet[2752]: E0909 04:55:05.627016 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.627046 kubelet[2752]: W0909 04:55:05.627036 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.627273 kubelet[2752]: E0909 04:55:05.627061 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.627273 kubelet[2752]: E0909 04:55:05.627247 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.627273 kubelet[2752]: W0909 04:55:05.627256 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.627273 kubelet[2752]: E0909 04:55:05.627265 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.627549 kubelet[2752]: E0909 04:55:05.627520 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.627549 kubelet[2752]: W0909 04:55:05.627555 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.627549 kubelet[2752]: E0909 04:55:05.627572 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.627846 kubelet[2752]: E0909 04:55:05.627820 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.627846 kubelet[2752]: W0909 04:55:05.627838 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.627946 kubelet[2752]: E0909 04:55:05.627857 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.628060 kubelet[2752]: E0909 04:55:05.628005 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.628060 kubelet[2752]: W0909 04:55:05.628019 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.628060 kubelet[2752]: E0909 04:55:05.628027 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.628233 kubelet[2752]: E0909 04:55:05.628166 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.628233 kubelet[2752]: W0909 04:55:05.628181 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.628233 kubelet[2752]: E0909 04:55:05.628191 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:05.655021 kubelet[2752]: E0909 04:55:05.654971 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:05.655021 kubelet[2752]: W0909 04:55:05.655003 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:05.655021 kubelet[2752]: E0909 04:55:05.655024 2752 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:05.692922 containerd[1539]: time="2025-09-09T04:55:05.692859184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bscrp,Uid:34c262a1-83cf-4e18-98cc-c8837a4024b9,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:05.724081 containerd[1539]: time="2025-09-09T04:55:05.724026915Z" level=info msg="connecting to shim 1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade" address="unix:///run/containerd/s/f00b64b752ba23f8c71174bd10017dad8e046990d8cd13bc37a540c5d08d0e8e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:05.764153 systemd[1]: Started cri-containerd-1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade.scope - libcontainer container 1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade. Sep 9 04:55:05.813560 containerd[1539]: time="2025-09-09T04:55:05.813456250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bscrp,Uid:34c262a1-83cf-4e18-98cc-c8837a4024b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\"" Sep 9 04:55:05.817580 containerd[1539]: time="2025-09-09T04:55:05.817260927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 04:55:05.862628 containerd[1539]: time="2025-09-09T04:55:05.862511920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cf56cbcfb-m8c8z,Uid:8cedbc3b-5db4-408d-814e-a8c1eec562d7,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:05.891710 containerd[1539]: time="2025-09-09T04:55:05.891492311Z" level=info msg="connecting to shim 88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb" address="unix:///run/containerd/s/b0eb9cb4d6a80ea77e12c727b3413711d0eadb4faa4804ea0a639d031b153d27" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:05.944049 systemd[1]: Started cri-containerd-88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb.scope - libcontainer 
container 88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb. Sep 9 04:55:06.045654 containerd[1539]: time="2025-09-09T04:55:06.044838086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cf56cbcfb-m8c8z,Uid:8cedbc3b-5db4-408d-814e-a8c1eec562d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb\"" Sep 9 04:55:07.317614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000869511.mount: Deactivated successfully. Sep 9 04:55:07.320565 kubelet[2752]: E0909 04:55:07.320058 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:07.397980 containerd[1539]: time="2025-09-09T04:55:07.397883623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.399501 containerd[1539]: time="2025-09-09T04:55:07.399424496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 9 04:55:07.401783 containerd[1539]: time="2025-09-09T04:55:07.401158678Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.404626 containerd[1539]: time="2025-09-09T04:55:07.404576005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.405428 containerd[1539]: time="2025-09-09T04:55:07.405400478Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.588082115s" Sep 9 04:55:07.405539 containerd[1539]: time="2025-09-09T04:55:07.405507872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 04:55:07.408176 containerd[1539]: time="2025-09-09T04:55:07.408155682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 04:55:07.409554 containerd[1539]: time="2025-09-09T04:55:07.409474528Z" level=info msg="CreateContainer within sandbox \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 04:55:07.421093 containerd[1539]: time="2025-09-09T04:55:07.421041953Z" level=info msg="Container 2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:07.430159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272434165.mount: Deactivated successfully. 
Sep 9 04:55:07.442415 containerd[1539]: time="2025-09-09T04:55:07.442363747Z" level=info msg="CreateContainer within sandbox \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\"" Sep 9 04:55:07.444732 containerd[1539]: time="2025-09-09T04:55:07.443386009Z" level=info msg="StartContainer for \"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\"" Sep 9 04:55:07.446581 containerd[1539]: time="2025-09-09T04:55:07.446389280Z" level=info msg="connecting to shim 2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3" address="unix:///run/containerd/s/f00b64b752ba23f8c71174bd10017dad8e046990d8cd13bc37a540c5d08d0e8e" protocol=ttrpc version=3 Sep 9 04:55:07.468963 systemd[1]: Started cri-containerd-2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3.scope - libcontainer container 2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3. Sep 9 04:55:07.536634 containerd[1539]: time="2025-09-09T04:55:07.536545260Z" level=info msg="StartContainer for \"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\" returns successfully" Sep 9 04:55:07.563396 systemd[1]: cri-containerd-2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3.scope: Deactivated successfully. 
Sep 9 04:55:07.570817 containerd[1539]: time="2025-09-09T04:55:07.570352827Z" level=info msg="received exit event container_id:\"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\" id:\"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\" pid:3330 exited_at:{seconds:1757393707 nanos:569976769}" Sep 9 04:55:07.570817 containerd[1539]: time="2025-09-09T04:55:07.570390265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\" id:\"2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3\" pid:3330 exited_at:{seconds:1757393707 nanos:569976769}" Sep 9 04:55:07.610055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ffb3a3b6c1ec6bf401bdf539de8c7719003db8f93f387d2cb40d35dd9654cf3-rootfs.mount: Deactivated successfully. Sep 9 04:55:09.319545 kubelet[2752]: E0909 04:55:09.319487 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:10.047379 containerd[1539]: time="2025-09-09T04:55:10.047318267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:10.049089 containerd[1539]: time="2025-09-09T04:55:10.048983470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 9 04:55:10.050293 containerd[1539]: time="2025-09-09T04:55:10.050083939Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:10.052991 containerd[1539]: time="2025-09-09T04:55:10.052836771Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:10.053680 containerd[1539]: time="2025-09-09T04:55:10.053630974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.645379617s" Sep 9 04:55:10.053680 containerd[1539]: time="2025-09-09T04:55:10.053668772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 04:55:10.057156 containerd[1539]: time="2025-09-09T04:55:10.056917701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 04:55:10.074119 containerd[1539]: time="2025-09-09T04:55:10.073442212Z" level=info msg="CreateContainer within sandbox \"88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 04:55:10.085979 containerd[1539]: time="2025-09-09T04:55:10.085933070Z" level=info msg="Container d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:10.091207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172746996.mount: Deactivated successfully. 
Sep 9 04:55:10.102091 containerd[1539]: time="2025-09-09T04:55:10.102042080Z" level=info msg="CreateContainer within sandbox \"88a9b86c9582a488fa864176aa99fd487bf6949b99312f4fb7c63d5d4e29c6fb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481\"" Sep 9 04:55:10.102832 containerd[1539]: time="2025-09-09T04:55:10.102800285Z" level=info msg="StartContainer for \"d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481\"" Sep 9 04:55:10.104565 containerd[1539]: time="2025-09-09T04:55:10.104325854Z" level=info msg="connecting to shim d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481" address="unix:///run/containerd/s/b0eb9cb4d6a80ea77e12c727b3413711d0eadb4faa4804ea0a639d031b153d27" protocol=ttrpc version=3 Sep 9 04:55:10.127987 systemd[1]: Started cri-containerd-d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481.scope - libcontainer container d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481. 
Sep 9 04:55:10.185456 containerd[1539]: time="2025-09-09T04:55:10.185273487Z" level=info msg="StartContainer for \"d0819173bf11c5ae96a141009b253418f5d0df56fb254f1a177ccf222d6bd481\" returns successfully" Sep 9 04:55:10.563644 kubelet[2752]: I0909 04:55:10.563553 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cf56cbcfb-m8c8z" podStartSLOduration=1.560893848 podStartE2EDuration="5.563535162s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:06.052169245 +0000 UTC m=+29.835599382" lastFinishedPulling="2025-09-09 04:55:10.054810599 +0000 UTC m=+33.838240696" observedRunningTime="2025-09-09 04:55:10.56207819 +0000 UTC m=+34.345508327" watchObservedRunningTime="2025-09-09 04:55:10.563535162 +0000 UTC m=+34.346965299" Sep 9 04:55:11.319520 kubelet[2752]: E0909 04:55:11.319432 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:11.501123 kubelet[2752]: I0909 04:55:11.501040 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:13.319473 kubelet[2752]: E0909 04:55:13.319277 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:13.719236 containerd[1539]: time="2025-09-09T04:55:13.719149948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:13.720314 containerd[1539]: 
time="2025-09-09T04:55:13.720262667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 04:55:13.721384 containerd[1539]: time="2025-09-09T04:55:13.721324107Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:13.730938 containerd[1539]: time="2025-09-09T04:55:13.730299091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.673340353s" Sep 9 04:55:13.730938 containerd[1539]: time="2025-09-09T04:55:13.730353489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 04:55:13.730938 containerd[1539]: time="2025-09-09T04:55:13.730362449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:13.734353 containerd[1539]: time="2025-09-09T04:55:13.734312341Z" level=info msg="CreateContainer within sandbox \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 04:55:13.749408 containerd[1539]: time="2025-09-09T04:55:13.749361898Z" level=info msg="Container e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:13.764533 containerd[1539]: time="2025-09-09T04:55:13.764449653Z" level=info msg="CreateContainer within sandbox 
\"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\"" Sep 9 04:55:13.765299 containerd[1539]: time="2025-09-09T04:55:13.765257703Z" level=info msg="StartContainer for \"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\"" Sep 9 04:55:13.769161 containerd[1539]: time="2025-09-09T04:55:13.769118838Z" level=info msg="connecting to shim e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874" address="unix:///run/containerd/s/f00b64b752ba23f8c71174bd10017dad8e046990d8cd13bc37a540c5d08d0e8e" protocol=ttrpc version=3 Sep 9 04:55:13.796957 systemd[1]: Started cri-containerd-e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874.scope - libcontainer container e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874. Sep 9 04:55:13.842260 containerd[1539]: time="2025-09-09T04:55:13.842215102Z" level=info msg="StartContainer for \"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\" returns successfully" Sep 9 04:55:14.355491 containerd[1539]: time="2025-09-09T04:55:14.355437544Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 04:55:14.359223 systemd[1]: cri-containerd-e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874.scope: Deactivated successfully. Sep 9 04:55:14.360003 systemd[1]: cri-containerd-e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874.scope: Consumed 520ms CPU time, 189.4M memory peak, 165.8M written to disk. 
Sep 9 04:55:14.363373 containerd[1539]: time="2025-09-09T04:55:14.363320512Z" level=info msg="received exit event container_id:\"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\" id:\"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\" pid:3430 exited_at:{seconds:1757393714 nanos:362312867}" Sep 9 04:55:14.363655 containerd[1539]: time="2025-09-09T04:55:14.363637141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\" id:\"e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874\" pid:3430 exited_at:{seconds:1757393714 nanos:362312867}" Sep 9 04:55:14.385535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5e9e8ba8a7d28035427ece6735aec5822e657c20107bb6ddae2e84e50425874-rootfs.mount: Deactivated successfully. Sep 9 04:55:14.417314 kubelet[2752]: I0909 04:55:14.417270 2752 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 04:55:14.480370 systemd[1]: Created slice kubepods-besteffort-podcf41e489_e598_4a00_bd01_7a899133e93e.slice - libcontainer container kubepods-besteffort-podcf41e489_e598_4a00_bd01_7a899133e93e.slice. Sep 9 04:55:14.499730 systemd[1]: Created slice kubepods-burstable-podf6077cf2_ad7b_48a7_94ba_6e46ba9a474f.slice - libcontainer container kubepods-burstable-podf6077cf2_ad7b_48a7_94ba_6e46ba9a474f.slice. Sep 9 04:55:14.511661 systemd[1]: Created slice kubepods-burstable-pode2c5c76d_598f_40f5_a9cd_152b6731067e.slice - libcontainer container kubepods-burstable-pode2c5c76d_598f_40f5_a9cd_152b6731067e.slice. Sep 9 04:55:14.522229 systemd[1]: Created slice kubepods-besteffort-pod897ef4af_ec3b_4639_90ae_9b9b2ec794f6.slice - libcontainer container kubepods-besteffort-pod897ef4af_ec3b_4639_90ae_9b9b2ec794f6.slice. 
Sep 9 04:55:14.539897 systemd[1]: Created slice kubepods-besteffort-podc949512a_ac76_4e34_849b_f3ddd2412487.slice - libcontainer container kubepods-besteffort-podc949512a_ac76_4e34_849b_f3ddd2412487.slice. Sep 9 04:55:14.544897 kubelet[2752]: W0909 04:55:14.542807 2752 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4452-0-0-n-1f6e10e4b9" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object Sep 9 04:55:14.544897 kubelet[2752]: E0909 04:55:14.542859 2752 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4452-0-0-n-1f6e10e4b9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object" logger="UnhandledError" Sep 9 04:55:14.544897 kubelet[2752]: W0909 04:55:14.542927 2752 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4452-0-0-n-1f6e10e4b9" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object Sep 9 04:55:14.544897 kubelet[2752]: E0909 04:55:14.542939 2752 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4452-0-0-n-1f6e10e4b9\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object" 
logger="UnhandledError" Sep 9 04:55:14.544897 kubelet[2752]: W0909 04:55:14.542978 2752 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4452-0-0-n-1f6e10e4b9" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object Sep 9 04:55:14.545180 kubelet[2752]: E0909 04:55:14.542993 2752 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4452-0-0-n-1f6e10e4b9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object" logger="UnhandledError" Sep 9 04:55:14.550711 kubelet[2752]: W0909 04:55:14.549607 2752 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4452-0-0-n-1f6e10e4b9" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object Sep 9 04:55:14.550711 kubelet[2752]: E0909 04:55:14.549662 2752 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4452-0-0-n-1f6e10e4b9\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object" logger="UnhandledError" Sep 9 04:55:14.550711 kubelet[2752]: W0909 04:55:14.549720 2752 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is 
forbidden: User "system:node:ci-4452-0-0-n-1f6e10e4b9" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object Sep 9 04:55:14.550711 kubelet[2752]: E0909 04:55:14.549737 2752 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4452-0-0-n-1f6e10e4b9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452-0-0-n-1f6e10e4b9' and this object" logger="UnhandledError" Sep 9 04:55:14.556977 systemd[1]: Created slice kubepods-besteffort-pod49c24938_7e87_4974_8b97_83c495fe1674.slice - libcontainer container kubepods-besteffort-pod49c24938_7e87_4974_8b97_83c495fe1674.slice. Sep 9 04:55:14.566006 systemd[1]: Created slice kubepods-besteffort-pod508e4faa_80f1_4c32_8a1d_6406e0cf50f0.slice - libcontainer container kubepods-besteffort-pod508e4faa_80f1_4c32_8a1d_6406e0cf50f0.slice. Sep 9 04:55:14.574805 systemd[1]: Created slice kubepods-besteffort-pod31c0265a_7539_4746_89ab_c4858c3d797d.slice - libcontainer container kubepods-besteffort-pod31c0265a_7539_4746_89ab_c4858c3d797d.slice. 
Sep 9 04:55:14.587169 kubelet[2752]: I0909 04:55:14.587094 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpfs\" (UniqueName: \"kubernetes.io/projected/f6077cf2-ad7b-48a7-94ba-6e46ba9a474f-kube-api-access-hnpfs\") pod \"coredns-7c65d6cfc9-4987m\" (UID: \"f6077cf2-ad7b-48a7-94ba-6e46ba9a474f\") " pod="kube-system/coredns-7c65d6cfc9-4987m" Sep 9 04:55:14.587169 kubelet[2752]: I0909 04:55:14.587153 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c949512a-ac76-4e34-849b-f3ddd2412487-calico-apiserver-certs\") pod \"calico-apiserver-86b766bf49-45pbp\" (UID: \"c949512a-ac76-4e34-849b-f3ddd2412487\") " pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" Sep 9 04:55:14.587395 kubelet[2752]: I0909 04:55:14.587353 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcqd\" (UniqueName: \"kubernetes.io/projected/c949512a-ac76-4e34-849b-f3ddd2412487-kube-api-access-qxcqd\") pod \"calico-apiserver-86b766bf49-45pbp\" (UID: \"c949512a-ac76-4e34-849b-f3ddd2412487\") " pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" Sep 9 04:55:14.587425 kubelet[2752]: I0909 04:55:14.587398 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf41e489-e598-4a00-bd01-7a899133e93e-tigera-ca-bundle\") pod \"calico-kube-controllers-f49cb55df-s2tds\" (UID: \"cf41e489-e598-4a00-bd01-7a899133e93e\") " pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" Sep 9 04:55:14.587425 kubelet[2752]: I0909 04:55:14.587420 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6077cf2-ad7b-48a7-94ba-6e46ba9a474f-config-volume\") pod 
\"coredns-7c65d6cfc9-4987m\" (UID: \"f6077cf2-ad7b-48a7-94ba-6e46ba9a474f\") " pod="kube-system/coredns-7c65d6cfc9-4987m" Sep 9 04:55:14.587501 kubelet[2752]: I0909 04:55:14.587439 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c5c76d-598f-40f5-a9cd-152b6731067e-config-volume\") pod \"coredns-7c65d6cfc9-jkxm5\" (UID: \"e2c5c76d-598f-40f5-a9cd-152b6731067e\") " pod="kube-system/coredns-7c65d6cfc9-jkxm5" Sep 9 04:55:14.587501 kubelet[2752]: I0909 04:55:14.587473 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8sdt\" (UniqueName: \"kubernetes.io/projected/cf41e489-e598-4a00-bd01-7a899133e93e-kube-api-access-h8sdt\") pod \"calico-kube-controllers-f49cb55df-s2tds\" (UID: \"cf41e489-e598-4a00-bd01-7a899133e93e\") " pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" Sep 9 04:55:14.587501 kubelet[2752]: I0909 04:55:14.587497 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-calico-apiserver-certs\") pod \"calico-apiserver-86b766bf49-7bvkb\" (UID: \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\") " pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" Sep 9 04:55:14.587574 kubelet[2752]: I0909 04:55:14.587516 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmjr\" (UniqueName: \"kubernetes.io/projected/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-kube-api-access-jhmjr\") pod \"calico-apiserver-86b766bf49-7bvkb\" (UID: \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\") " pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" Sep 9 04:55:14.588404 kubelet[2752]: I0909 04:55:14.587793 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vqnxn\" (UniqueName: \"kubernetes.io/projected/e2c5c76d-598f-40f5-a9cd-152b6731067e-kube-api-access-vqnxn\") pod \"coredns-7c65d6cfc9-jkxm5\" (UID: \"e2c5c76d-598f-40f5-a9cd-152b6731067e\") " pod="kube-system/coredns-7c65d6cfc9-jkxm5" Sep 9 04:55:14.690021 kubelet[2752]: I0909 04:55:14.688951 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49c24938-7e87-4974-8b97-83c495fe1674-calico-apiserver-certs\") pod \"calico-apiserver-76d8564c8-jpwmc\" (UID: \"49c24938-7e87-4974-8b97-83c495fe1674\") " pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" Sep 9 04:55:14.690021 kubelet[2752]: I0909 04:55:14.688996 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlpkj\" (UniqueName: \"kubernetes.io/projected/31c0265a-7539-4746-89ab-c4858c3d797d-kube-api-access-vlpkj\") pod \"whisker-557d574bcd-j2fcw\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " pod="calico-system/whisker-557d574bcd-j2fcw" Sep 9 04:55:14.690021 kubelet[2752]: I0909 04:55:14.689027 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-config\") pod \"goldmane-7988f88666-nfp9b\" (UID: \"508e4faa-80f1-4c32-8a1d-6406e0cf50f0\") " pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:14.690021 kubelet[2752]: I0909 04:55:14.689091 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nfp9b\" (UID: \"508e4faa-80f1-4c32-8a1d-6406e0cf50f0\") " pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:14.690021 kubelet[2752]: I0909 04:55:14.689109 2752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-goldmane-key-pair\") pod \"goldmane-7988f88666-nfp9b\" (UID: \"508e4faa-80f1-4c32-8a1d-6406e0cf50f0\") " pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:14.690343 kubelet[2752]: I0909 04:55:14.689165 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql47j\" (UniqueName: \"kubernetes.io/projected/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-kube-api-access-ql47j\") pod \"goldmane-7988f88666-nfp9b\" (UID: \"508e4faa-80f1-4c32-8a1d-6406e0cf50f0\") " pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:14.690343 kubelet[2752]: I0909 04:55:14.689186 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair\") pod \"whisker-557d574bcd-j2fcw\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " pod="calico-system/whisker-557d574bcd-j2fcw" Sep 9 04:55:14.690343 kubelet[2752]: I0909 04:55:14.689206 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-ca-bundle\") pod \"whisker-557d574bcd-j2fcw\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " pod="calico-system/whisker-557d574bcd-j2fcw" Sep 9 04:55:14.690343 kubelet[2752]: I0909 04:55:14.689251 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbxw\" (UniqueName: \"kubernetes.io/projected/49c24938-7e87-4974-8b97-83c495fe1674-kube-api-access-ktbxw\") pod \"calico-apiserver-76d8564c8-jpwmc\" (UID: \"49c24938-7e87-4974-8b97-83c495fe1674\") " pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" Sep 9 
04:55:14.794704 containerd[1539]: time="2025-09-09T04:55:14.794530602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49cb55df-s2tds,Uid:cf41e489-e598-4a00-bd01-7a899133e93e,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:14.811785 containerd[1539]: time="2025-09-09T04:55:14.810631565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4987m,Uid:f6077cf2-ad7b-48a7-94ba-6e46ba9a474f,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:14.824092 containerd[1539]: time="2025-09-09T04:55:14.824042541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkxm5,Uid:e2c5c76d-598f-40f5-a9cd-152b6731067e,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:14.832007 containerd[1539]: time="2025-09-09T04:55:14.831944668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-7bvkb,Uid:897ef4af-ec3b-4639-90ae-9b9b2ec794f6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:14.850835 containerd[1539]: time="2025-09-09T04:55:14.849729253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-45pbp,Uid:c949512a-ac76-4e34-849b-f3ddd2412487,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:14.863669 containerd[1539]: time="2025-09-09T04:55:14.863629612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d8564c8-jpwmc,Uid:49c24938-7e87-4974-8b97-83c495fe1674,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:14.995253 containerd[1539]: time="2025-09-09T04:55:14.994958031Z" level=error msg="Failed to destroy network for sandbox \"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:14.999301 containerd[1539]: time="2025-09-09T04:55:14.999218044Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76d8564c8-jpwmc,Uid:49c24938-7e87-4974-8b97-83c495fe1674,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:14.999889 kubelet[2752]: E0909 04:55:14.999513 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:14.999889 kubelet[2752]: E0909 04:55:14.999588 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" Sep 9 04:55:14.999889 kubelet[2752]: E0909 04:55:14.999607 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" Sep 9 04:55:15.000004 kubelet[2752]: E0909 04:55:14.999649 2752 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d8564c8-jpwmc_calico-apiserver(49c24938-7e87-4974-8b97-83c495fe1674)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d8564c8-jpwmc_calico-apiserver(49c24938-7e87-4974-8b97-83c495fe1674)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e644588f9dd4079835f5468a584e54dab7144f8bea97412a8b99a044ef04d3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" podUID="49c24938-7e87-4974-8b97-83c495fe1674" Sep 9 04:55:15.010637 containerd[1539]: time="2025-09-09T04:55:15.010573358Z" level=error msg="Failed to destroy network for sandbox \"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.013176 containerd[1539]: time="2025-09-09T04:55:15.013097158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-45pbp,Uid:c949512a-ac76-4e34-849b-f3ddd2412487,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.013876 kubelet[2752]: E0909 04:55:15.013812 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.013973 kubelet[2752]: E0909 04:55:15.013885 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" Sep 9 04:55:15.013973 kubelet[2752]: E0909 04:55:15.013905 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" Sep 9 04:55:15.014156 kubelet[2752]: E0909 04:55:15.013980 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86b766bf49-45pbp_calico-apiserver(c949512a-ac76-4e34-849b-f3ddd2412487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86b766bf49-45pbp_calico-apiserver(c949512a-ac76-4e34-849b-f3ddd2412487)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0651fe219547fa0c587754cf34e2985322f5348463c68202781999d3088c47c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" podUID="c949512a-ac76-4e34-849b-f3ddd2412487" Sep 9 04:55:15.016467 containerd[1539]: time="2025-09-09T04:55:15.016072983Z" level=error msg="Failed to destroy network for sandbox \"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.020236 containerd[1539]: time="2025-09-09T04:55:15.020063376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkxm5,Uid:e2c5c76d-598f-40f5-a9cd-152b6731067e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.020725 kubelet[2752]: E0909 04:55:15.020395 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.020725 kubelet[2752]: E0909 04:55:15.020456 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jkxm5" 
Sep 9 04:55:15.020725 kubelet[2752]: E0909 04:55:15.020478 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jkxm5" Sep 9 04:55:15.020864 kubelet[2752]: E0909 04:55:15.020532 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jkxm5_kube-system(e2c5c76d-598f-40f5-a9cd-152b6731067e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jkxm5_kube-system(e2c5c76d-598f-40f5-a9cd-152b6731067e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19dce09c238aa29523443f45f1346edacbe201c975ccf59e97fbe3f91048f018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jkxm5" podUID="e2c5c76d-598f-40f5-a9cd-152b6731067e" Sep 9 04:55:15.040944 containerd[1539]: time="2025-09-09T04:55:15.040879674Z" level=error msg="Failed to destroy network for sandbox \"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.043116 containerd[1539]: time="2025-09-09T04:55:15.043003207Z" level=error msg="Failed to destroy network for sandbox \"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.044358 containerd[1539]: time="2025-09-09T04:55:15.044223128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49cb55df-s2tds,Uid:cf41e489-e598-4a00-bd01-7a899133e93e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.044546 kubelet[2752]: E0909 04:55:15.044483 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.044598 kubelet[2752]: E0909 04:55:15.044542 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" Sep 9 04:55:15.044598 kubelet[2752]: E0909 04:55:15.044568 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" Sep 9 04:55:15.044655 kubelet[2752]: E0909 04:55:15.044620 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f49cb55df-s2tds_calico-system(cf41e489-e598-4a00-bd01-7a899133e93e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f49cb55df-s2tds_calico-system(cf41e489-e598-4a00-bd01-7a899133e93e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fff080055f0e78bd4de3e36bb4d0c2b5835bdb78e9b890cea3f919f7d0088d3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" podUID="cf41e489-e598-4a00-bd01-7a899133e93e" Sep 9 04:55:15.046849 containerd[1539]: time="2025-09-09T04:55:15.046601012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-7bvkb,Uid:897ef4af-ec3b-4639-90ae-9b9b2ec794f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.047325 kubelet[2752]: E0909 04:55:15.047107 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
04:55:15.047325 kubelet[2752]: E0909 04:55:15.047184 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" Sep 9 04:55:15.047325 kubelet[2752]: E0909 04:55:15.047204 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" Sep 9 04:55:15.047442 kubelet[2752]: E0909 04:55:15.047251 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86b766bf49-7bvkb_calico-apiserver(897ef4af-ec3b-4639-90ae-9b9b2ec794f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86b766bf49-7bvkb_calico-apiserver(897ef4af-ec3b-4639-90ae-9b9b2ec794f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"def1ca831b6f7f9d38156427b1e0b457a8209d375a264224f93c874e6b8da5c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" podUID="897ef4af-ec3b-4639-90ae-9b9b2ec794f6" Sep 9 04:55:15.048189 containerd[1539]: time="2025-09-09T04:55:15.048151843Z" level=error msg="Failed to destroy network for sandbox 
\"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.050000 containerd[1539]: time="2025-09-09T04:55:15.049823870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4987m,Uid:f6077cf2-ad7b-48a7-94ba-6e46ba9a474f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.050161 kubelet[2752]: E0909 04:55:15.050100 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.050223 kubelet[2752]: E0909 04:55:15.050203 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4987m" Sep 9 04:55:15.050274 kubelet[2752]: E0909 04:55:15.050241 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4987m" Sep 9 04:55:15.050325 kubelet[2752]: E0909 04:55:15.050301 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4987m_kube-system(f6077cf2-ad7b-48a7-94ba-6e46ba9a474f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4987m_kube-system(f6077cf2-ad7b-48a7-94ba-6e46ba9a474f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdd77c111e1538b98737df2bf789a3b062b033c99a9187f598599ec6289f6894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4987m" podUID="f6077cf2-ad7b-48a7-94ba-6e46ba9a474f" Sep 9 04:55:15.326038 systemd[1]: Created slice kubepods-besteffort-pode660f608_9e83_4b76_ae9d_6598e92ef788.slice - libcontainer container kubepods-besteffort-pode660f608_9e83_4b76_ae9d_6598e92ef788.slice. 
Sep 9 04:55:15.330589 containerd[1539]: time="2025-09-09T04:55:15.330534418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6brn6,Uid:e660f608-9e83-4b76-ae9d-6598e92ef788,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:15.386584 containerd[1539]: time="2025-09-09T04:55:15.386513877Z" level=error msg="Failed to destroy network for sandbox \"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.388960 containerd[1539]: time="2025-09-09T04:55:15.388855683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6brn6,Uid:e660f608-9e83-4b76-ae9d-6598e92ef788,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.389338 kubelet[2752]: E0909 04:55:15.389262 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:15.389409 kubelet[2752]: E0909 04:55:15.389369 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:15.389449 kubelet[2752]: E0909 04:55:15.389410 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6brn6" Sep 9 04:55:15.389543 kubelet[2752]: E0909 04:55:15.389481 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6brn6_calico-system(e660f608-9e83-4b76-ae9d-6598e92ef788)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6brn6_calico-system(e660f608-9e83-4b76-ae9d-6598e92ef788)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce19a58ea19f8a391c03d5660cdb4157fddc3c06fd0cde9b8662942000ac360b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6brn6" podUID="e660f608-9e83-4b76-ae9d-6598e92ef788" Sep 9 04:55:15.557544 containerd[1539]: time="2025-09-09T04:55:15.557501477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 04:55:15.752649 systemd[1]: run-netns-cni\x2dbb2618bb\x2d42fe\x2d4e98\x2d3ce3\x2d1c1f8ecc0074.mount: Deactivated successfully. Sep 9 04:55:15.752765 systemd[1]: run-netns-cni\x2d497e58aa\x2d815c\x2d19f7\x2de0fa\x2df44a800caff5.mount: Deactivated successfully. 
Sep 9 04:55:15.791556 kubelet[2752]: E0909 04:55:15.791334 2752 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 9 04:55:15.791556 kubelet[2752]: E0909 04:55:15.791373 2752 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Sep 9 04:55:15.791556 kubelet[2752]: E0909 04:55:15.791442 2752 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 9 04:55:15.791556 kubelet[2752]: E0909 04:55:15.791464 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair podName:31c0265a-7539-4746-89ab-c4858c3d797d nodeName:}" failed. No retries permitted until 2025-09-09 04:55:16.291432194 +0000 UTC m=+40.074862371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair") pod "whisker-557d574bcd-j2fcw" (UID: "31c0265a-7539-4746-89ab-c4858c3d797d") : failed to sync secret cache: timed out waiting for the condition Sep 9 04:55:15.791556 kubelet[2752]: E0909 04:55:15.791490 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-goldmane-key-pair podName:508e4faa-80f1-4c32-8a1d-6406e0cf50f0 nodeName:}" failed. No retries permitted until 2025-09-09 04:55:16.291478192 +0000 UTC m=+40.074908329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-goldmane-key-pair") pod "goldmane-7988f88666-nfp9b" (UID: "508e4faa-80f1-4c32-8a1d-6406e0cf50f0") : failed to sync secret cache: timed out waiting for the condition Sep 9 04:55:15.792853 kubelet[2752]: E0909 04:55:15.791519 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-config podName:508e4faa-80f1-4c32-8a1d-6406e0cf50f0 nodeName:}" failed. No retries permitted until 2025-09-09 04:55:16.291503712 +0000 UTC m=+40.074933849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/508e4faa-80f1-4c32-8a1d-6406e0cf50f0-config") pod "goldmane-7988f88666-nfp9b" (UID: "508e4faa-80f1-4c32-8a1d-6406e0cf50f0") : failed to sync configmap cache: timed out waiting for the condition Sep 9 04:55:16.372355 containerd[1539]: time="2025-09-09T04:55:16.372272306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nfp9b,Uid:508e4faa-80f1-4c32-8a1d-6406e0cf50f0,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:16.381979 containerd[1539]: time="2025-09-09T04:55:16.381805508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557d574bcd-j2fcw,Uid:31c0265a-7539-4746-89ab-c4858c3d797d,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:16.449076 containerd[1539]: time="2025-09-09T04:55:16.449017270Z" level=error msg="Failed to destroy network for sandbox \"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.452388 containerd[1539]: time="2025-09-09T04:55:16.452003303Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-nfp9b,Uid:508e4faa-80f1-4c32-8a1d-6406e0cf50f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.452962 kubelet[2752]: E0909 04:55:16.452926 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.453188 kubelet[2752]: E0909 04:55:16.453124 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:16.453291 kubelet[2752]: E0909 04:55:16.453274 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nfp9b" Sep 9 04:55:16.453506 kubelet[2752]: E0909 04:55:16.453478 2752 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nfp9b_calico-system(508e4faa-80f1-4c32-8a1d-6406e0cf50f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nfp9b_calico-system(508e4faa-80f1-4c32-8a1d-6406e0cf50f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67aa32f75a3ccf24ca23109b45663ddc20c0d5aecfda2220d426c2142bec59b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nfp9b" podUID="508e4faa-80f1-4c32-8a1d-6406e0cf50f0" Sep 9 04:55:16.459546 containerd[1539]: time="2025-09-09T04:55:16.459489844Z" level=error msg="Failed to destroy network for sandbox \"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.461912 containerd[1539]: time="2025-09-09T04:55:16.461862615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557d574bcd-j2fcw,Uid:31c0265a-7539-4746-89ab-c4858c3d797d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.462561 kubelet[2752]: E0909 04:55:16.462117 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:16.462561 kubelet[2752]: E0909 04:55:16.462172 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557d574bcd-j2fcw" Sep 9 04:55:16.462561 kubelet[2752]: E0909 04:55:16.462189 2752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-557d574bcd-j2fcw" Sep 9 04:55:16.462695 kubelet[2752]: E0909 04:55:16.462225 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-557d574bcd-j2fcw_calico-system(31c0265a-7539-4746-89ab-c4858c3d797d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-557d574bcd-j2fcw_calico-system(31c0265a-7539-4746-89ab-c4858c3d797d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c4cfa6f06763d9e6b263ba1358aa139573b0da00b5598d3e4973071dee000af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-557d574bcd-j2fcw" podUID="31c0265a-7539-4746-89ab-c4858c3d797d" Sep 9 04:55:16.748989 systemd[1]: 
run-netns-cni\x2dff611505\x2d3e19\x2d2980\x2d9183\x2dea81718c38dd.mount: Deactivated successfully. Sep 9 04:55:16.749146 systemd[1]: run-netns-cni\x2d4edeff0a\x2d3695\x2d3dfe\x2df120\x2d318e1219cbf2.mount: Deactivated successfully. Sep 9 04:55:22.394610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058983784.mount: Deactivated successfully. Sep 9 04:55:22.419078 containerd[1539]: time="2025-09-09T04:55:22.418996866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:22.420628 containerd[1539]: time="2025-09-09T04:55:22.420526964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 04:55:22.422582 containerd[1539]: time="2025-09-09T04:55:22.421588348Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:22.424175 containerd[1539]: time="2025-09-09T04:55:22.424134430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:22.424761 containerd[1539]: time="2025-09-09T04:55:22.424716662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.867168066s" Sep 9 04:55:22.424832 containerd[1539]: time="2025-09-09T04:55:22.424785941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 
04:55:22.441036 containerd[1539]: time="2025-09-09T04:55:22.440870383Z" level=info msg="CreateContainer within sandbox \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 04:55:22.454160 containerd[1539]: time="2025-09-09T04:55:22.454093428Z" level=info msg="Container 37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:22.470969 containerd[1539]: time="2025-09-09T04:55:22.470805382Z" level=info msg="CreateContainer within sandbox \"1c1d6d8028809af1cb51928469cc396739c4b880d45e37abebec47211c2edade\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\"" Sep 9 04:55:22.472539 containerd[1539]: time="2025-09-09T04:55:22.472479317Z" level=info msg="StartContainer for \"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\"" Sep 9 04:55:22.475916 containerd[1539]: time="2025-09-09T04:55:22.475805508Z" level=info msg="connecting to shim 37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc" address="unix:///run/containerd/s/f00b64b752ba23f8c71174bd10017dad8e046990d8cd13bc37a540c5d08d0e8e" protocol=ttrpc version=3 Sep 9 04:55:22.508116 systemd[1]: Started cri-containerd-37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc.scope - libcontainer container 37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc. 
Sep 9 04:55:22.569503 containerd[1539]: time="2025-09-09T04:55:22.569246168Z" level=info msg="StartContainer for \"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" returns successfully" Sep 9 04:55:22.616030 kubelet[2752]: I0909 04:55:22.614872 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bscrp" podStartSLOduration=1.005231853 podStartE2EDuration="17.614853335s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:05.816837674 +0000 UTC m=+29.600267811" lastFinishedPulling="2025-09-09 04:55:22.426459156 +0000 UTC m=+46.209889293" observedRunningTime="2025-09-09 04:55:22.614120466 +0000 UTC m=+46.397550603" watchObservedRunningTime="2025-09-09 04:55:22.614853335 +0000 UTC m=+46.398283472" Sep 9 04:55:22.739176 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 04:55:22.740015 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 9 04:55:22.747313 containerd[1539]: time="2025-09-09T04:55:22.747254981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"63648ad4ba5a80aa141984fbeb5e15dd333c2e27ee701db969fbce87fc18b49c\" pid:3756 exit_status:1 exited_at:{seconds:1757393722 nanos:746877306}" Sep 9 04:55:23.054530 kubelet[2752]: I0909 04:55:23.054192 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair\") pod \"31c0265a-7539-4746-89ab-c4858c3d797d\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " Sep 9 04:55:23.054530 kubelet[2752]: I0909 04:55:23.054260 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlpkj\" (UniqueName: \"kubernetes.io/projected/31c0265a-7539-4746-89ab-c4858c3d797d-kube-api-access-vlpkj\") pod \"31c0265a-7539-4746-89ab-c4858c3d797d\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " Sep 9 04:55:23.054530 kubelet[2752]: I0909 04:55:23.054353 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-ca-bundle\") pod \"31c0265a-7539-4746-89ab-c4858c3d797d\" (UID: \"31c0265a-7539-4746-89ab-c4858c3d797d\") " Sep 9 04:55:23.056971 kubelet[2752]: I0909 04:55:23.056864 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "31c0265a-7539-4746-89ab-c4858c3d797d" (UID: "31c0265a-7539-4746-89ab-c4858c3d797d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 04:55:23.062379 kubelet[2752]: I0909 04:55:23.062204 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c0265a-7539-4746-89ab-c4858c3d797d-kube-api-access-vlpkj" (OuterVolumeSpecName: "kube-api-access-vlpkj") pod "31c0265a-7539-4746-89ab-c4858c3d797d" (UID: "31c0265a-7539-4746-89ab-c4858c3d797d"). InnerVolumeSpecName "kube-api-access-vlpkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 04:55:23.063777 kubelet[2752]: I0909 04:55:23.063712 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "31c0265a-7539-4746-89ab-c4858c3d797d" (UID: "31c0265a-7539-4746-89ab-c4858c3d797d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 04:55:23.156005 kubelet[2752]: I0909 04:55:23.155969 2752 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-backend-key-pair\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:23.156212 kubelet[2752]: I0909 04:55:23.156198 2752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlpkj\" (UniqueName: \"kubernetes.io/projected/31c0265a-7539-4746-89ab-c4858c3d797d-kube-api-access-vlpkj\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:23.156289 kubelet[2752]: I0909 04:55:23.156278 2752 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c0265a-7539-4746-89ab-c4858c3d797d-whisker-ca-bundle\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:23.393416 systemd[1]: 
var-lib-kubelet-pods-31c0265a\x2d7539\x2d4746\x2d89ab\x2dc4858c3d797d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 04:55:23.393584 systemd[1]: var-lib-kubelet-pods-31c0265a\x2d7539\x2d4746\x2d89ab\x2dc4858c3d797d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvlpkj.mount: Deactivated successfully. Sep 9 04:55:23.596172 systemd[1]: Removed slice kubepods-besteffort-pod31c0265a_7539_4746_89ab_c4858c3d797d.slice - libcontainer container kubepods-besteffort-pod31c0265a_7539_4746_89ab_c4858c3d797d.slice. Sep 9 04:55:23.682104 systemd[1]: Created slice kubepods-besteffort-podaef42756_547e_4492_87de_94ebdce6345f.slice - libcontainer container kubepods-besteffort-podaef42756_547e_4492_87de_94ebdce6345f.slice. Sep 9 04:55:23.744786 containerd[1539]: time="2025-09-09T04:55:23.744696130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"a70e3521e12e84782ec559ed7e363a4e7efaa909988601fea07d7a68cbf6c98e\" pid:3811 exit_status:1 exited_at:{seconds:1757393723 nanos:744382094}" Sep 9 04:55:23.861838 kubelet[2752]: I0909 04:55:23.861630 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqpb\" (UniqueName: \"kubernetes.io/projected/aef42756-547e-4492-87de-94ebdce6345f-kube-api-access-ppqpb\") pod \"whisker-78c4ff84c7-gd6k8\" (UID: \"aef42756-547e-4492-87de-94ebdce6345f\") " pod="calico-system/whisker-78c4ff84c7-gd6k8" Sep 9 04:55:23.861838 kubelet[2752]: I0909 04:55:23.861718 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef42756-547e-4492-87de-94ebdce6345f-whisker-ca-bundle\") pod \"whisker-78c4ff84c7-gd6k8\" (UID: \"aef42756-547e-4492-87de-94ebdce6345f\") " pod="calico-system/whisker-78c4ff84c7-gd6k8" Sep 9 04:55:23.861838 kubelet[2752]: 
I0909 04:55:23.861805 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aef42756-547e-4492-87de-94ebdce6345f-whisker-backend-key-pair\") pod \"whisker-78c4ff84c7-gd6k8\" (UID: \"aef42756-547e-4492-87de-94ebdce6345f\") " pod="calico-system/whisker-78c4ff84c7-gd6k8" Sep 9 04:55:23.987492 containerd[1539]: time="2025-09-09T04:55:23.987284069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c4ff84c7-gd6k8,Uid:aef42756-547e-4492-87de-94ebdce6345f,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:24.171002 systemd-networkd[1405]: calicc2b4ec855c: Link UP Sep 9 04:55:24.173137 systemd-networkd[1405]: calicc2b4ec855c: Gained carrier Sep 9 04:55:24.197650 containerd[1539]: 2025-09-09 04:55:24.015 [INFO][3825] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:24.197650 containerd[1539]: 2025-09-09 04:55:24.053 [INFO][3825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0 whisker-78c4ff84c7- calico-system aef42756-547e-4492-87de-94ebdce6345f 928 0 2025-09-09 04:55:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78c4ff84c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 whisker-78c4ff84c7-gd6k8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicc2b4ec855c [] [] }} ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-" Sep 9 04:55:24.197650 containerd[1539]: 2025-09-09 04:55:24.053 [INFO][3825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.197650 containerd[1539]: 2025-09-09 04:55:24.103 [INFO][3836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" HandleID="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.103 [INFO][3836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" HandleID="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d37d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"whisker-78c4ff84c7-gd6k8", "timestamp":"2025-09-09 04:55:24.103196179 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.103 [INFO][3836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.103 [INFO][3836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.103 [INFO][3836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.117 [INFO][3836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.125 [INFO][3836] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.132 [INFO][3836] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.134 [INFO][3836] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198061 containerd[1539]: 2025-09-09 04:55:24.137 [INFO][3836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.137 [INFO][3836] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.139 [INFO][3836] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026 Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.147 [INFO][3836] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.158 [INFO][3836] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.65/26] block=192.168.15.64/26 handle="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.158 [INFO][3836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.65/26] handle="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.158 [INFO][3836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:24.198258 containerd[1539]: 2025-09-09 04:55:24.158 [INFO][3836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.65/26] IPv6=[] ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" HandleID="k8s-pod-network.1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.198392 containerd[1539]: 2025-09-09 04:55:24.161 [INFO][3825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0", GenerateName:"whisker-78c4ff84c7-", Namespace:"calico-system", SelfLink:"", UID:"aef42756-547e-4492-87de-94ebdce6345f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c4ff84c7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"whisker-78c4ff84c7-gd6k8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc2b4ec855c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:24.198392 containerd[1539]: 2025-09-09 04:55:24.161 [INFO][3825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.65/32] ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.198460 containerd[1539]: 2025-09-09 04:55:24.161 [INFO][3825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc2b4ec855c ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.198460 containerd[1539]: 2025-09-09 04:55:24.173 [INFO][3825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.198552 containerd[1539]: 2025-09-09 04:55:24.175 [INFO][3825] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0", GenerateName:"whisker-78c4ff84c7-", Namespace:"calico-system", SelfLink:"", UID:"aef42756-547e-4492-87de-94ebdce6345f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c4ff84c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026", Pod:"whisker-78c4ff84c7-gd6k8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc2b4ec855c", MAC:"9a:09:da:e4:47:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:24.198601 containerd[1539]: 2025-09-09 04:55:24.190 [INFO][3825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" Namespace="calico-system" 
Pod="whisker-78c4ff84c7-gd6k8" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-whisker--78c4ff84c7--gd6k8-eth0" Sep 9 04:55:24.259131 containerd[1539]: time="2025-09-09T04:55:24.259016976Z" level=info msg="connecting to shim 1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026" address="unix:///run/containerd/s/b522e51d6455e98f6cc195e71faf039f42c5787c878b04f3062abb60885c693d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:24.324598 kubelet[2752]: I0909 04:55:24.324175 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c0265a-7539-4746-89ab-c4858c3d797d" path="/var/lib/kubelet/pods/31c0265a-7539-4746-89ab-c4858c3d797d/volumes" Sep 9 04:55:24.334352 systemd[1]: Started cri-containerd-1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026.scope - libcontainer container 1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026. Sep 9 04:55:24.493126 containerd[1539]: time="2025-09-09T04:55:24.493072267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c4ff84c7-gd6k8,Uid:aef42756-547e-4492-87de-94ebdce6345f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026\"" Sep 9 04:55:24.497550 containerd[1539]: time="2025-09-09T04:55:24.497280023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 04:55:25.233083 systemd-networkd[1405]: calicc2b4ec855c: Gained IPv6LL Sep 9 04:55:26.212789 containerd[1539]: time="2025-09-09T04:55:26.212645780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:26.214719 containerd[1539]: time="2025-09-09T04:55:26.214682046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 04:55:26.224786 containerd[1539]: time="2025-09-09T04:55:26.224487742Z" level=info msg="ImageCreate event 
name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:26.226297 containerd[1539]: time="2025-09-09T04:55:26.226258770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:26.226760 containerd[1539]: time="2025-09-09T04:55:26.226585008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.729264265s" Sep 9 04:55:26.226760 containerd[1539]: time="2025-09-09T04:55:26.226699727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 04:55:26.231652 containerd[1539]: time="2025-09-09T04:55:26.231238777Z" level=info msg="CreateContainer within sandbox \"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 04:55:26.242792 containerd[1539]: time="2025-09-09T04:55:26.241516030Z" level=info msg="Container 8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:26.253465 containerd[1539]: time="2025-09-09T04:55:26.253398951Z" level=info msg="CreateContainer within sandbox \"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2\"" Sep 9 04:55:26.255566 containerd[1539]: time="2025-09-09T04:55:26.255514057Z" 
level=info msg="StartContainer for \"8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2\"" Sep 9 04:55:26.262086 containerd[1539]: time="2025-09-09T04:55:26.262012815Z" level=info msg="connecting to shim 8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2" address="unix:///run/containerd/s/b522e51d6455e98f6cc195e71faf039f42c5787c878b04f3062abb60885c693d" protocol=ttrpc version=3 Sep 9 04:55:26.290976 systemd[1]: Started cri-containerd-8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2.scope - libcontainer container 8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2. Sep 9 04:55:26.321181 containerd[1539]: time="2025-09-09T04:55:26.321129825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-7bvkb,Uid:897ef4af-ec3b-4639-90ae-9b9b2ec794f6,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:26.322718 containerd[1539]: time="2025-09-09T04:55:26.322653815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-45pbp,Uid:c949512a-ac76-4e34-849b-f3ddd2412487,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:26.396716 containerd[1539]: time="2025-09-09T04:55:26.395622814Z" level=info msg="StartContainer for \"8c261e48f413c250bedbfc034e37268244e14d561babae53c90b38cc653854e2\" returns successfully" Sep 9 04:55:26.404537 containerd[1539]: time="2025-09-09T04:55:26.404322037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 04:55:26.618550 systemd-networkd[1405]: calid0be03dd4df: Link UP Sep 9 04:55:26.619669 systemd-networkd[1405]: calid0be03dd4df: Gained carrier Sep 9 04:55:26.643936 containerd[1539]: 2025-09-09 04:55:26.384 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.643936 containerd[1539]: 2025-09-09 04:55:26.434 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0 calico-apiserver-86b766bf49- calico-apiserver c949512a-ac76-4e34-849b-f3ddd2412487 855 0 2025-09-09 04:54:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86b766bf49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 calico-apiserver-86b766bf49-45pbp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0be03dd4df [] [] }} ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-" Sep 9 04:55:26.643936 containerd[1539]: 2025-09-09 04:55:26.434 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.643936 containerd[1539]: 2025-09-09 04:55:26.513 [INFO][4078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.513 [INFO][4078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" 
Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031d0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"calico-apiserver-86b766bf49-45pbp", "timestamp":"2025-09-09 04:55:26.513471278 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.513 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.513 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.513 [INFO][4078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.538 [INFO][4078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.554 [INFO][4078] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.567 [INFO][4078] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.573 [INFO][4078] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645321 containerd[1539]: 2025-09-09 04:55:26.579 [INFO][4078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 
04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.579 [INFO][4078] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.584 [INFO][4078] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29 Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.598 [INFO][4078] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4078] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.66/26] block=192.168.15.64/26 handle="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.66/26] handle="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 04:55:26.645652 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.66/26] IPv6=[] ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.646237 containerd[1539]: 2025-09-09 04:55:26.613 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0", GenerateName:"calico-apiserver-86b766bf49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c949512a-ac76-4e34-849b-f3ddd2412487", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"calico-apiserver-86b766bf49-45pbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be03dd4df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.646493 containerd[1539]: 2025-09-09 04:55:26.613 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.66/32] ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.646493 containerd[1539]: 2025-09-09 04:55:26.613 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0be03dd4df ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.646493 containerd[1539]: 2025-09-09 04:55:26.620 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.646566 containerd[1539]: 2025-09-09 04:55:26.623 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0", GenerateName:"calico-apiserver-86b766bf49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c949512a-ac76-4e34-849b-f3ddd2412487", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29", Pod:"calico-apiserver-86b766bf49-45pbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be03dd4df", MAC:"5a:e4:49:05:dd:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.647122 containerd[1539]: 2025-09-09 04:55:26.638 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-45pbp" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:26.713663 systemd-networkd[1405]: cali22f9d3e81e5: Link UP Sep 9 
04:55:26.714984 systemd-networkd[1405]: cali22f9d3e81e5: Gained carrier Sep 9 04:55:26.730472 containerd[1539]: time="2025-09-09T04:55:26.730158050Z" level=info msg="connecting to shim fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" address="unix:///run/containerd/s/92b86febbddc86cb55df7f08fa08ddea37d49c57f3ffbc77f077e5c48750f99e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.755475 containerd[1539]: 2025-09-09 04:55:26.402 [INFO][4038] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.755475 containerd[1539]: 2025-09-09 04:55:26.429 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0 calico-apiserver-86b766bf49- calico-apiserver 897ef4af-ec3b-4639-90ae-9b9b2ec794f6 854 0 2025-09-09 04:54:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86b766bf49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 calico-apiserver-86b766bf49-7bvkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22f9d3e81e5 [] [] }} ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-" Sep 9 04:55:26.755475 containerd[1539]: 2025-09-09 04:55:26.432 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.755475 
containerd[1539]: 2025-09-09 04:55:26.517 [INFO][4079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.518 [INFO][4079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000342830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"calico-apiserver-86b766bf49-7bvkb", "timestamp":"2025-09-09 04:55:26.517870089 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.518 [INFO][4079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.610 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.644 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.654 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.665 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.670 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756099 containerd[1539]: 2025-09-09 04:55:26.677 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.677 [INFO][4079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.681 [INFO][4079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4 Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.689 [INFO][4079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.699 [INFO][4079] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.67/26] block=192.168.15.64/26 handle="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.699 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.67/26] handle="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.699 [INFO][4079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:26.756332 containerd[1539]: 2025-09-09 04:55:26.699 [INFO][4079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.67/26] IPv6=[] ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.756472 containerd[1539]: 2025-09-09 04:55:26.706 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0", GenerateName:"calico-apiserver-86b766bf49-", Namespace:"calico-apiserver", SelfLink:"", UID:"897ef4af-ec3b-4639-90ae-9b9b2ec794f6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"86b766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"calico-apiserver-86b766bf49-7bvkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f9d3e81e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.756524 containerd[1539]: 2025-09-09 04:55:26.706 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.67/32] ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.756524 containerd[1539]: 2025-09-09 04:55:26.706 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f9d3e81e5 ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.756524 containerd[1539]: 2025-09-09 04:55:26.717 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.756936 containerd[1539]: 2025-09-09 04:55:26.718 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0", GenerateName:"calico-apiserver-86b766bf49-", Namespace:"calico-apiserver", SelfLink:"", UID:"897ef4af-ec3b-4639-90ae-9b9b2ec794f6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b766bf49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4", Pod:"calico-apiserver-86b766bf49-7bvkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22f9d3e81e5", MAC:"6a:0b:1b:5e:58:ca", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.757004 containerd[1539]: 2025-09-09 04:55:26.746 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Namespace="calico-apiserver" Pod="calico-apiserver-86b766bf49-7bvkb" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:26.778199 systemd[1]: Started cri-containerd-fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29.scope - libcontainer container fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29. Sep 9 04:55:26.806118 containerd[1539]: time="2025-09-09T04:55:26.805977751Z" level=info msg="connecting to shim 229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" address="unix:///run/containerd/s/59de7abf59126e40de93041515ac603a7dc6156d0a50c75f1752bf5c6c86d2dd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.845590 systemd[1]: Started cri-containerd-229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4.scope - libcontainer container 229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4. 
Sep 9 04:55:26.894620 containerd[1539]: time="2025-09-09T04:55:26.894494648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-45pbp,Uid:c949512a-ac76-4e34-849b-f3ddd2412487,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\"" Sep 9 04:55:26.927723 containerd[1539]: time="2025-09-09T04:55:26.927641069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b766bf49-7bvkb,Uid:897ef4af-ec3b-4639-90ae-9b9b2ec794f6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\"" Sep 9 04:55:28.561901 systemd-networkd[1405]: cali22f9d3e81e5: Gained IPv6LL Sep 9 04:55:28.625480 systemd-networkd[1405]: calid0be03dd4df: Gained IPv6LL Sep 9 04:55:28.706490 kubelet[2752]: I0909 04:55:28.706456 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:29.041260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307572625.mount: Deactivated successfully. 
Sep 9 04:55:29.062534 containerd[1539]: time="2025-09-09T04:55:29.062455382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:29.064555 containerd[1539]: time="2025-09-09T04:55:29.064480260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 04:55:29.067459 containerd[1539]: time="2025-09-09T04:55:29.067390977Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:29.080633 containerd[1539]: time="2025-09-09T04:55:29.080560122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:29.083597 containerd[1539]: time="2025-09-09T04:55:29.083413159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.677221174s" Sep 9 04:55:29.083597 containerd[1539]: time="2025-09-09T04:55:29.083560439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 04:55:29.086419 containerd[1539]: time="2025-09-09T04:55:29.086164756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:55:29.090285 containerd[1539]: time="2025-09-09T04:55:29.089789472Z" level=info msg="CreateContainer within sandbox 
\"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 04:55:29.104011 containerd[1539]: time="2025-09-09T04:55:29.103960776Z" level=info msg="Container 12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:29.110348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302747155.mount: Deactivated successfully. Sep 9 04:55:29.126362 containerd[1539]: time="2025-09-09T04:55:29.126297471Z" level=info msg="CreateContainer within sandbox \"1f295655dd91b48a8e378bdfa7860cc585ae359068b57c3a12162120b8e52026\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9\"" Sep 9 04:55:29.130079 containerd[1539]: time="2025-09-09T04:55:29.130035987Z" level=info msg="StartContainer for \"12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9\"" Sep 9 04:55:29.131731 containerd[1539]: time="2025-09-09T04:55:29.131687146Z" level=info msg="connecting to shim 12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9" address="unix:///run/containerd/s/b522e51d6455e98f6cc195e71faf039f42c5787c878b04f3062abb60885c693d" protocol=ttrpc version=3 Sep 9 04:55:29.162012 systemd[1]: Started cri-containerd-12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9.scope - libcontainer container 12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9. 
Sep 9 04:55:29.243159 containerd[1539]: time="2025-09-09T04:55:29.242601463Z" level=info msg="StartContainer for \"12eb46d9625f362659ab994120849ada276697ce30aac91740167d993a9328f9\" returns successfully" Sep 9 04:55:29.322339 containerd[1539]: time="2025-09-09T04:55:29.322222695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4987m,Uid:f6077cf2-ad7b-48a7-94ba-6e46ba9a474f,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:29.323129 containerd[1539]: time="2025-09-09T04:55:29.322519855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkxm5,Uid:e2c5c76d-598f-40f5-a9cd-152b6731067e,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:29.323129 containerd[1539]: time="2025-09-09T04:55:29.322585095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6brn6,Uid:e660f608-9e83-4b76-ae9d-6598e92ef788,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:29.323129 containerd[1539]: time="2025-09-09T04:55:29.322618895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49cb55df-s2tds,Uid:cf41e489-e598-4a00-bd01-7a899133e93e,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:29.612670 systemd-networkd[1405]: cali7984bd4af22: Link UP Sep 9 04:55:29.614569 systemd-networkd[1405]: cali7984bd4af22: Gained carrier Sep 9 04:55:29.640654 containerd[1539]: 2025-09-09 04:55:29.423 [INFO][4307] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:29.640654 containerd[1539]: 2025-09-09 04:55:29.456 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0 coredns-7c65d6cfc9- kube-system f6077cf2-ad7b-48a7-94ba-6e46ba9a474f 853 0 2025-09-09 04:54:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s ci-4452-0-0-n-1f6e10e4b9 coredns-7c65d6cfc9-4987m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7984bd4af22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-" Sep 9 04:55:29.640654 containerd[1539]: 2025-09-09 04:55:29.456 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.640654 containerd[1539]: 2025-09-09 04:55:29.539 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" HandleID="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.540 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" HandleID="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000365180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"coredns-7c65d6cfc9-4987m", "timestamp":"2025-09-09 04:55:29.539934734 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.540 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.540 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.540 [INFO][4372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.557 [INFO][4372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.566 [INFO][4372] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.577 [INFO][4372] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.580 [INFO][4372] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.641506 containerd[1539]: 2025-09-09 04:55:29.583 [INFO][4372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.584 [INFO][4372] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.586 [INFO][4372] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.591 [INFO][4372] ipam/ipam.go 1243: 
Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4372] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.68/26] block=192.168.15.64/26 handle="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.68/26] handle="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:29.642099 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.68/26] IPv6=[] ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" HandleID="k8s-pod-network.2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.608 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6077cf2-ad7b-48a7-94ba-6e46ba9a474f", ResourceVersion:"853", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"coredns-7c65d6cfc9-4987m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7984bd4af22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.609 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.68/32] ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.609 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7984bd4af22 
ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.612 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.612 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6077cf2-ad7b-48a7-94ba-6e46ba9a474f", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", 
ContainerID:"2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb", Pod:"coredns-7c65d6cfc9-4987m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7984bd4af22", MAC:"56:92:09:50:40:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.643140 containerd[1539]: 2025-09-09 04:55:29.635 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4987m" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--4987m-eth0" Sep 9 04:55:29.676618 containerd[1539]: time="2025-09-09T04:55:29.675460905Z" level=info msg="connecting to shim 2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb" address="unix:///run/containerd/s/3fcb195180d804f17569bdef1e0c931392244e5266f2552d7d8ea2aa2ecd833d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:29.728358 systemd[1]: Started cri-containerd-2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb.scope - libcontainer container 2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb. 
Sep 9 04:55:29.752222 systemd-networkd[1405]: cali1e196db9949: Link UP Sep 9 04:55:29.752826 systemd-networkd[1405]: cali1e196db9949: Gained carrier Sep 9 04:55:29.784502 kubelet[2752]: I0909 04:55:29.784416 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78c4ff84c7-gd6k8" podStartSLOduration=2.196246734 podStartE2EDuration="6.784349704s" podCreationTimestamp="2025-09-09 04:55:23 +0000 UTC" firstStartedPulling="2025-09-09 04:55:24.496877067 +0000 UTC m=+48.280307204" lastFinishedPulling="2025-09-09 04:55:29.084980037 +0000 UTC m=+52.868410174" observedRunningTime="2025-09-09 04:55:29.655424927 +0000 UTC m=+53.438855064" watchObservedRunningTime="2025-09-09 04:55:29.784349704 +0000 UTC m=+53.567779841" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.407 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.445 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0 calico-kube-controllers-f49cb55df- calico-system cf41e489-e598-4a00-bd01-7a899133e93e 843 0 2025-09-09 04:55:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f49cb55df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 calico-kube-controllers-f49cb55df-s2tds eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1e196db9949 [] [] }} ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-" Sep 
9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.446 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.566 [INFO][4368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" HandleID="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.572 [INFO][4368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" HandleID="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d37b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"calico-kube-controllers-f49cb55df-s2tds", "timestamp":"2025-09-09 04:55:29.563434148 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.572 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.602 [INFO][4368] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.662 [INFO][4368] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.682 [INFO][4368] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.702 [INFO][4368] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.706 [INFO][4368] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.710 [INFO][4368] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.710 [INFO][4368] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.713 [INFO][4368] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898 Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.726 [INFO][4368] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.739 [INFO][4368] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.69/26] block=192.168.15.64/26 handle="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.739 [INFO][4368] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.69/26] handle="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.740 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:29.791887 containerd[1539]: 2025-09-09 04:55:29.740 [INFO][4368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.69/26] IPv6=[] ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" HandleID="k8s-pod-network.42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.747 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0", GenerateName:"calico-kube-controllers-f49cb55df-", Namespace:"calico-system", SelfLink:"", UID:"cf41e489-e598-4a00-bd01-7a899133e93e", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"f49cb55df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"calico-kube-controllers-f49cb55df-s2tds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e196db9949", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.749 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.69/32] ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.749 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e196db9949 ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.753 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" 
Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.755 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0", GenerateName:"calico-kube-controllers-f49cb55df-", Namespace:"calico-system", SelfLink:"", UID:"cf41e489-e598-4a00-bd01-7a899133e93e", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f49cb55df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898", Pod:"calico-kube-controllers-f49cb55df-s2tds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e196db9949", MAC:"3a:6f:13:91:6c:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.793227 containerd[1539]: 2025-09-09 04:55:29.788 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" Namespace="calico-system" Pod="calico-kube-controllers-f49cb55df-s2tds" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--kube--controllers--f49cb55df--s2tds-eth0" Sep 9 04:55:29.876895 containerd[1539]: time="2025-09-09T04:55:29.876557242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4987m,Uid:f6077cf2-ad7b-48a7-94ba-6e46ba9a474f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb\"" Sep 9 04:55:29.885423 containerd[1539]: time="2025-09-09T04:55:29.884946553Z" level=info msg="connecting to shim 42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898" address="unix:///run/containerd/s/9ef92babb9b47cc9a383ae90a53af76a288164f0cd27ffa6ba90b73a3eb8d0b1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:29.886582 containerd[1539]: time="2025-09-09T04:55:29.886535471Z" level=info msg="CreateContainer within sandbox \"2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:55:29.922359 systemd-networkd[1405]: cali0881b4eef3d: Link UP Sep 9 04:55:29.925049 systemd-networkd[1405]: cali0881b4eef3d: Gained carrier Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.421 [INFO][4314] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.454 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0 coredns-7c65d6cfc9- kube-system e2c5c76d-598f-40f5-a9cd-152b6731067e 857 0 2025-09-09 04:54:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 coredns-7c65d6cfc9-jkxm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0881b4eef3d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.455 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.573 [INFO][4374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" HandleID="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.573 [INFO][4374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" HandleID="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391990), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"coredns-7c65d6cfc9-jkxm5", "timestamp":"2025-09-09 04:55:29.573504617 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.574 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.740 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.740 [INFO][4374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.779 [INFO][4374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.795 [INFO][4374] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.820 [INFO][4374] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.829 [INFO][4374] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.844 [INFO][4374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.845 [INFO][4374] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 
handle="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.859 [INFO][4374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18 Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.868 [INFO][4374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.897 [INFO][4374] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.15.70/26] block=192.168.15.64/26 handle="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.898 [INFO][4374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.70/26] handle="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.898 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 04:55:29.967478 containerd[1539]: 2025-09-09 04:55:29.898 [INFO][4374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.70/26] IPv6=[] ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" HandleID="k8s-pod-network.97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.968627 containerd[1539]: 2025-09-09 04:55:29.913 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2c5c76d-598f-40f5-a9cd-152b6731067e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"coredns-7c65d6cfc9-jkxm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0881b4eef3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.968627 containerd[1539]: 2025-09-09 04:55:29.913 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.70/32] ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.968627 containerd[1539]: 2025-09-09 04:55:29.913 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0881b4eef3d ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.968627 containerd[1539]: 2025-09-09 04:55:29.929 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:29.968627 containerd[1539]: 2025-09-09 04:55:29.934 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e2c5c76d-598f-40f5-a9cd-152b6731067e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18", Pod:"coredns-7c65d6cfc9-jkxm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0881b4eef3d", MAC:"2a:2f:44:eb:15:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:29.968627 
containerd[1539]: 2025-09-09 04:55:29.960 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jkxm5" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-coredns--7c65d6cfc9--jkxm5-eth0" Sep 9 04:55:30.024649 containerd[1539]: time="2025-09-09T04:55:30.024283280Z" level=info msg="Container 2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:30.036772 containerd[1539]: time="2025-09-09T04:55:30.034944286Z" level=info msg="connecting to shim 97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18" address="unix:///run/containerd/s/f2ae0acc821c02ee4551928b81f04fc879676c0ff4beef0f3314534ba7af14b7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:30.053551 containerd[1539]: time="2025-09-09T04:55:30.053249577Z" level=info msg="CreateContainer within sandbox \"2be1f3735243e253cb1f3477ae29f688dc88cac8cb060f2f55512c386451d3fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844\"" Sep 9 04:55:30.056866 containerd[1539]: time="2025-09-09T04:55:30.056811779Z" level=info msg="StartContainer for \"2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844\"" Sep 9 04:55:30.057184 systemd[1]: Started cri-containerd-42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898.scope - libcontainer container 42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898. 
Sep 9 04:55:30.065350 containerd[1539]: time="2025-09-09T04:55:30.065294985Z" level=info msg="connecting to shim 2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844" address="unix:///run/containerd/s/3fcb195180d804f17569bdef1e0c931392244e5266f2552d7d8ea2aa2ecd833d" protocol=ttrpc version=3 Sep 9 04:55:30.102086 systemd-networkd[1405]: caliee00cbced3a: Link UP Sep 9 04:55:30.103098 systemd-networkd[1405]: caliee00cbced3a: Gained carrier Sep 9 04:55:30.109042 systemd[1]: Started cri-containerd-97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18.scope - libcontainer container 97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18. Sep 9 04:55:30.144136 systemd[1]: Started cri-containerd-2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844.scope - libcontainer container 2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844. Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.470 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.514 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0 csi-node-driver- calico-system e660f608-9e83-4b76-ae9d-6598e92ef788 731 0 2025-09-09 04:55:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 csi-node-driver-6brn6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee00cbced3a [] [] }} ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.514 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.575 [INFO][4388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" HandleID="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.576 [INFO][4388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" HandleID="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"csi-node-driver-6brn6", "timestamp":"2025-09-09 04:55:29.575829135 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.576 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.898 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.899 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.948 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.969 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.987 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:29.994 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.013 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.014 [INFO][4388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.027 [INFO][4388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0 Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.051 [INFO][4388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.077 [INFO][4388] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.71/26] block=192.168.15.64/26 handle="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.077 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.71/26] handle="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.077 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:30.156711 containerd[1539]: 2025-09-09 04:55:30.077 [INFO][4388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.71/26] IPv6=[] ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" HandleID="k8s-pod-network.2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 04:55:30.092 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e660f608-9e83-4b76-ae9d-6598e92ef788", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"csi-node-driver-6brn6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee00cbced3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 04:55:30.093 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.71/32] ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 04:55:30.093 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee00cbced3a ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 04:55:30.105 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 
04:55:30.107 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e660f608-9e83-4b76-ae9d-6598e92ef788", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0", Pod:"csi-node-driver-6brn6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee00cbced3a", MAC:"3e:53:a8:f1:da:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:30.158179 containerd[1539]: 2025-09-09 04:55:30.153 
[INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" Namespace="calico-system" Pod="csi-node-driver-6brn6" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-csi--node--driver--6brn6-eth0" Sep 9 04:55:30.206067 containerd[1539]: time="2025-09-09T04:55:30.205979510Z" level=info msg="connecting to shim 2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0" address="unix:///run/containerd/s/a344fb8e885d11c2fbc0181268d154185feae1e8a7d4f069b3b7fe0abddf910a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:30.262011 systemd[1]: Started cri-containerd-2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0.scope - libcontainer container 2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0. Sep 9 04:55:30.295998 containerd[1539]: time="2025-09-09T04:55:30.295730125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkxm5,Uid:e2c5c76d-598f-40f5-a9cd-152b6731067e,Namespace:kube-system,Attempt:0,} returns sandbox id \"97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18\"" Sep 9 04:55:30.301775 containerd[1539]: time="2025-09-09T04:55:30.301443769Z" level=info msg="CreateContainer within sandbox \"97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:55:30.321781 containerd[1539]: time="2025-09-09T04:55:30.321666381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d8564c8-jpwmc,Uid:49c24938-7e87-4974-8b97-83c495fe1674,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:30.330618 containerd[1539]: time="2025-09-09T04:55:30.330568786Z" level=info msg="StartContainer for \"2a3fe835b6b7b2f78e6861f6abcd1935f2ec33dfba5c9abd3970d6907da8a844\" returns successfully" Sep 9 04:55:30.339391 containerd[1539]: time="2025-09-09T04:55:30.339284152Z" level=info msg="Container 
a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:30.351908 containerd[1539]: time="2025-09-09T04:55:30.351695399Z" level=info msg="CreateContainer within sandbox \"97faddf001bce810f5f32a91cd69ed9429a08c8741edc737cbe88e2167d19a18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15\"" Sep 9 04:55:30.353929 containerd[1539]: time="2025-09-09T04:55:30.353874001Z" level=info msg="StartContainer for \"a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15\"" Sep 9 04:55:30.357263 containerd[1539]: time="2025-09-09T04:55:30.356945482Z" level=info msg="connecting to shim a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15" address="unix:///run/containerd/s/f2ae0acc821c02ee4551928b81f04fc879676c0ff4beef0f3314534ba7af14b7" protocol=ttrpc version=3 Sep 9 04:55:30.432003 systemd[1]: Started cri-containerd-a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15.scope - libcontainer container a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15. 
Sep 9 04:55:30.555683 containerd[1539]: time="2025-09-09T04:55:30.554391243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49cb55df-s2tds,Uid:cf41e489-e598-4a00-bd01-7a899133e93e,Namespace:calico-system,Attempt:0,} returns sandbox id \"42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898\"" Sep 9 04:55:30.597031 containerd[1539]: time="2025-09-09T04:55:30.596922589Z" level=info msg="StartContainer for \"a55c803b89312a884b005b5ae99092a4af590d9bcceccb0099d709842e8bda15\" returns successfully" Sep 9 04:55:30.721323 containerd[1539]: time="2025-09-09T04:55:30.721029024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6brn6,Uid:e660f608-9e83-4b76-ae9d-6598e92ef788,Namespace:calico-system,Attempt:0,} returns sandbox id \"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0\"" Sep 9 04:55:30.736016 kubelet[2752]: I0909 04:55:30.735947 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4987m" podStartSLOduration=47.735919033 podStartE2EDuration="47.735919033s" podCreationTimestamp="2025-09-09 04:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:55:30.678455558 +0000 UTC m=+54.461885735" watchObservedRunningTime="2025-09-09 04:55:30.735919033 +0000 UTC m=+54.519349170" Sep 9 04:55:30.812227 systemd-networkd[1405]: cali6df36f91e4d: Link UP Sep 9 04:55:30.813123 systemd-networkd[1405]: cali6df36f91e4d: Gained carrier Sep 9 04:55:30.834147 kubelet[2752]: I0909 04:55:30.834084 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jkxm5" podStartSLOduration=47.834063253 podStartE2EDuration="47.834063253s" podCreationTimestamp="2025-09-09 04:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 04:55:30.768497853 +0000 UTC m=+54.551927990" watchObservedRunningTime="2025-09-09 04:55:30.834063253 +0000 UTC m=+54.617493390" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.428 [INFO][4634] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.482 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0 calico-apiserver-76d8564c8- calico-apiserver 49c24938-7e87-4974-8b97-83c495fe1674 858 0 2025-09-09 04:55:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d8564c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 calico-apiserver-76d8564c8-jpwmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6df36f91e4d [] [] }} ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.482 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.621 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" 
HandleID="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.622 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" HandleID="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"calico-apiserver-76d8564c8-jpwmc", "timestamp":"2025-09-09 04:55:30.620400523 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.622 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.622 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.622 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.656 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.747 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.773 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.779 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.784 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.784 [INFO][4675] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.787 [INFO][4675] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5 Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.793 [INFO][4675] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.804 [INFO][4675] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.72/26] block=192.168.15.64/26 handle="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.804 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.72/26] handle="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.805 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:30.838520 containerd[1539]: 2025-09-09 04:55:30.805 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.72/26] IPv6=[] ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" HandleID="k8s-pod-network.0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.808 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0", GenerateName:"calico-apiserver-76d8564c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"49c24938-7e87-4974-8b97-83c495fe1674", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76d8564c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"calico-apiserver-76d8564c8-jpwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df36f91e4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.809 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.72/32] ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.809 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6df36f91e4d ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.814 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.815 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0", GenerateName:"calico-apiserver-76d8564c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"49c24938-7e87-4974-8b97-83c495fe1674", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d8564c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5", Pod:"calico-apiserver-76d8564c8-jpwmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df36f91e4d", MAC:"72:b6:18:a8:4a:d2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:30.839998 containerd[1539]: 2025-09-09 04:55:30.834 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-jpwmc" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--jpwmc-eth0" Sep 9 04:55:30.901353 containerd[1539]: time="2025-09-09T04:55:30.901293734Z" level=info msg="connecting to shim 0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5" address="unix:///run/containerd/s/ebe696134dae5ed3c4f2598dba967aeeec4c7cee6ee66a5c1ffe4484902e9c6e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:30.929199 systemd-networkd[1405]: cali1e196db9949: Gained IPv6LL Sep 9 04:55:30.960046 systemd[1]: Started cri-containerd-0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5.scope - libcontainer container 0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5. 
Sep 9 04:55:31.037839 containerd[1539]: time="2025-09-09T04:55:31.037664119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d8564c8-jpwmc,Uid:49c24938-7e87-4974-8b97-83c495fe1674,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5\"" Sep 9 04:55:31.120966 systemd-networkd[1405]: cali0881b4eef3d: Gained IPv6LL Sep 9 04:55:31.213772 systemd-networkd[1405]: vxlan.calico: Link UP Sep 9 04:55:31.213783 systemd-networkd[1405]: vxlan.calico: Gained carrier Sep 9 04:55:31.249982 systemd-networkd[1405]: cali7984bd4af22: Gained IPv6LL Sep 9 04:55:31.312977 systemd-networkd[1405]: caliee00cbced3a: Gained IPv6LL Sep 9 04:55:31.320073 containerd[1539]: time="2025-09-09T04:55:31.320028080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nfp9b,Uid:508e4faa-80f1-4c32-8a1d-6406e0cf50f0,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:31.493404 systemd-networkd[1405]: cali7b3584cde13: Link UP Sep 9 04:55:31.495437 systemd-networkd[1405]: cali7b3584cde13: Gained carrier Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.383 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0 goldmane-7988f88666- calico-system 508e4faa-80f1-4c32-8a1d-6406e0cf50f0 856 0 2025-09-09 04:55:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 goldmane-7988f88666-nfp9b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b3584cde13 [] [] }} ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.383 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.419 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" HandleID="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.419 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" HandleID="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"goldmane-7988f88666-nfp9b", "timestamp":"2025-09-09 04:55:31.419215585 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.419 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.419 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.419 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.435 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.445 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.455 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.458 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.463 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.463 [INFO][4811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.465 [INFO][4811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5 Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.471 [INFO][4811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.482 [INFO][4811] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.73/26] block=192.168.15.64/26 handle="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.482 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.73/26] handle="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.482 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:31.535427 containerd[1539]: 2025-09-09 04:55:31.482 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.73/26] IPv6=[] ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" HandleID="k8s-pod-network.73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.487 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"508e4faa-80f1-4c32-8a1d-6406e0cf50f0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"goldmane-7988f88666-nfp9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b3584cde13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.487 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.73/32] ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.487 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b3584cde13 ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.500 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.501 [INFO][4797] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"508e4faa-80f1-4c32-8a1d-6406e0cf50f0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5", Pod:"goldmane-7988f88666-nfp9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b3584cde13", MAC:"f2:9b:63:4c:6f:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:31.537922 containerd[1539]: 2025-09-09 04:55:31.523 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" Namespace="calico-system" Pod="goldmane-7988f88666-nfp9b" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-goldmane--7988f88666--nfp9b-eth0" Sep 9 04:55:31.572346 containerd[1539]: time="2025-09-09T04:55:31.571994212Z" level=info msg="connecting to shim 73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5" address="unix:///run/containerd/s/fbb13d3b319ec5d4446669b14d5667b58082b7c62628365a74f0577679a66b4b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:31.626007 systemd[1]: Started cri-containerd-73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5.scope - libcontainer container 73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5. Sep 9 04:55:31.762664 containerd[1539]: time="2025-09-09T04:55:31.762617205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nfp9b,Uid:508e4faa-80f1-4c32-8a1d-6406e0cf50f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5\"" Sep 9 04:55:31.953109 systemd-networkd[1405]: cali6df36f91e4d: Gained IPv6LL Sep 9 04:55:32.821010 containerd[1539]: time="2025-09-09T04:55:32.820942928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:32.822365 containerd[1539]: time="2025-09-09T04:55:32.822316534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 04:55:32.824177 containerd[1539]: time="2025-09-09T04:55:32.823460898Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:32.827370 containerd[1539]: time="2025-09-09T04:55:32.827328513Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:32.828082 containerd[1539]: time="2025-09-09T04:55:32.828034716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.74178444s" Sep 9 04:55:32.828082 containerd[1539]: time="2025-09-09T04:55:32.828077156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 04:55:32.829716 containerd[1539]: time="2025-09-09T04:55:32.829684202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:55:32.833323 containerd[1539]: time="2025-09-09T04:55:32.833177016Z" level=info msg="CreateContainer within sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:55:32.846461 containerd[1539]: time="2025-09-09T04:55:32.846416267Z" level=info msg="Container c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:32.859076 containerd[1539]: time="2025-09-09T04:55:32.858997476Z" level=info msg="CreateContainer within sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\"" Sep 9 04:55:32.860430 containerd[1539]: time="2025-09-09T04:55:32.860014680Z" level=info msg="StartContainer for 
\"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\"" Sep 9 04:55:32.863173 containerd[1539]: time="2025-09-09T04:55:32.862704970Z" level=info msg="connecting to shim c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827" address="unix:///run/containerd/s/92b86febbddc86cb55df7f08fa08ddea37d49c57f3ffbc77f077e5c48750f99e" protocol=ttrpc version=3 Sep 9 04:55:32.894015 systemd[1]: Started cri-containerd-c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827.scope - libcontainer container c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827. Sep 9 04:55:32.955282 containerd[1539]: time="2025-09-09T04:55:32.955246049Z" level=info msg="StartContainer for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" returns successfully" Sep 9 04:55:33.168929 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL Sep 9 04:55:33.230026 containerd[1539]: time="2025-09-09T04:55:33.229968473Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:33.231944 containerd[1539]: time="2025-09-09T04:55:33.231901763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 04:55:33.233710 containerd[1539]: time="2025-09-09T04:55:33.233669093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 403.948051ms" Sep 9 04:55:33.233781 containerd[1539]: time="2025-09-09T04:55:33.233722973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 
04:55:33.234977 containerd[1539]: time="2025-09-09T04:55:33.234940260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 04:55:33.237179 containerd[1539]: time="2025-09-09T04:55:33.237135112Z" level=info msg="CreateContainer within sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:55:33.249974 containerd[1539]: time="2025-09-09T04:55:33.249914261Z" level=info msg="Container 7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:33.262119 containerd[1539]: time="2025-09-09T04:55:33.261959727Z" level=info msg="CreateContainer within sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\"" Sep 9 04:55:33.263752 containerd[1539]: time="2025-09-09T04:55:33.263696976Z" level=info msg="StartContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\"" Sep 9 04:55:33.273119 containerd[1539]: time="2025-09-09T04:55:33.273001467Z" level=info msg="connecting to shim 7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7" address="unix:///run/containerd/s/59de7abf59126e40de93041515ac603a7dc6156d0a50c75f1752bf5c6c86d2dd" protocol=ttrpc version=3 Sep 9 04:55:33.313022 systemd[1]: Started cri-containerd-7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7.scope - libcontainer container 7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7. 
Sep 9 04:55:33.386083 containerd[1539]: time="2025-09-09T04:55:33.386029722Z" level=info msg="StartContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" returns successfully" Sep 9 04:55:33.488997 systemd-networkd[1405]: cali7b3584cde13: Gained IPv6LL Sep 9 04:55:33.717519 kubelet[2752]: I0909 04:55:33.717390 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86b766bf49-45pbp" podStartSLOduration=29.789631324 podStartE2EDuration="35.717369604s" podCreationTimestamp="2025-09-09 04:54:58 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.901587161 +0000 UTC m=+50.685017298" lastFinishedPulling="2025-09-09 04:55:32.829325441 +0000 UTC m=+56.612755578" observedRunningTime="2025-09-09 04:55:33.700487112 +0000 UTC m=+57.483917249" watchObservedRunningTime="2025-09-09 04:55:33.717369604 +0000 UTC m=+57.500799741" Sep 9 04:55:33.719108 kubelet[2752]: I0909 04:55:33.719018 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86b766bf49-7bvkb" podStartSLOduration=29.414254249 podStartE2EDuration="35.719003693s" podCreationTimestamp="2025-09-09 04:54:58 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.929911814 +0000 UTC m=+50.713341951" lastFinishedPulling="2025-09-09 04:55:33.234661298 +0000 UTC m=+57.018091395" observedRunningTime="2025-09-09 04:55:33.718377129 +0000 UTC m=+57.501807266" watchObservedRunningTime="2025-09-09 04:55:33.719003693 +0000 UTC m=+57.502433830" Sep 9 04:55:34.692960 kubelet[2752]: I0909 04:55:34.692902 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:34.693825 kubelet[2752]: I0909 04:55:34.692735 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:36.881816 containerd[1539]: time="2025-09-09T04:55:36.881681166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:36.883757 containerd[1539]: time="2025-09-09T04:55:36.883621505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 04:55:36.885826 containerd[1539]: time="2025-09-09T04:55:36.885562604Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:36.891416 containerd[1539]: time="2025-09-09T04:55:36.890888617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:36.892968 containerd[1539]: time="2025-09-09T04:55:36.892608354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.657623613s" Sep 9 04:55:36.892968 containerd[1539]: time="2025-09-09T04:55:36.892653034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 04:55:36.897408 containerd[1539]: time="2025-09-09T04:55:36.897352040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 04:55:36.935496 containerd[1539]: time="2025-09-09T04:55:36.935431654Z" level=info msg="CreateContainer within sandbox \"42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 04:55:36.953477 containerd[1539]: 
time="2025-09-09T04:55:36.953422151Z" level=info msg="Container 71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:36.961711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722850080.mount: Deactivated successfully. Sep 9 04:55:36.972932 containerd[1539]: time="2025-09-09T04:55:36.972760821Z" level=info msg="CreateContainer within sandbox \"42c02182fd9197f0504cb95f94e7e64fceee59c802b2acc211a117d099a92898\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\"" Sep 9 04:55:36.974775 containerd[1539]: time="2025-09-09T04:55:36.974481558Z" level=info msg="StartContainer for \"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\"" Sep 9 04:55:36.978916 containerd[1539]: time="2025-09-09T04:55:36.978786201Z" level=info msg="connecting to shim 71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567" address="unix:///run/containerd/s/9ef92babb9b47cc9a383ae90a53af76a288164f0cd27ffa6ba90b73a3eb8d0b1" protocol=ttrpc version=3 Sep 9 04:55:37.016029 systemd[1]: Started cri-containerd-71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567.scope - libcontainer container 71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567. 
Sep 9 04:55:37.119224 containerd[1539]: time="2025-09-09T04:55:37.119134463Z" level=info msg="StartContainer for \"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" returns successfully" Sep 9 04:55:37.213153 containerd[1539]: time="2025-09-09T04:55:37.213101595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"d55eb9fcd7bf127b6a3511b0c9d3bfaa771ba8099504dc20dd4dc21b97dd0f26\" pid:5063 exited_at:{seconds:1757393737 nanos:212398508}" Sep 9 04:55:37.475711 kubelet[2752]: I0909 04:55:37.474693 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:37.741998 kubelet[2752]: I0909 04:55:37.741373 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f49cb55df-s2tds" podStartSLOduration=26.404556295 podStartE2EDuration="32.741352193s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:30.558780765 +0000 UTC m=+54.342210902" lastFinishedPulling="2025-09-09 04:55:36.895576663 +0000 UTC m=+60.679006800" observedRunningTime="2025-09-09 04:55:37.740033419 +0000 UTC m=+61.523463556" watchObservedRunningTime="2025-09-09 04:55:37.741352193 +0000 UTC m=+61.524782330" Sep 9 04:55:37.788475 containerd[1539]: time="2025-09-09T04:55:37.788434001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"6ba1f5497a8a0ddc4610f1d416d8ba9df07b65ae0f4e4c1616bfc6c2709a934d\" pid:5102 exited_at:{seconds:1757393737 nanos:786508739}" Sep 9 04:55:38.669041 containerd[1539]: time="2025-09-09T04:55:38.668831392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:38.671218 containerd[1539]: time="2025-09-09T04:55:38.670472333Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 04:55:38.673230 containerd[1539]: time="2025-09-09T04:55:38.673188927Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:38.676457 containerd[1539]: time="2025-09-09T04:55:38.676405967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:38.677281 containerd[1539]: time="2025-09-09T04:55:38.677234657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.779821096s" Sep 9 04:55:38.677281 containerd[1539]: time="2025-09-09T04:55:38.677277898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 04:55:38.681827 containerd[1539]: time="2025-09-09T04:55:38.680818742Z" level=info msg="CreateContainer within sandbox \"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 04:55:38.681827 containerd[1539]: time="2025-09-09T04:55:38.680922744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:55:38.701699 containerd[1539]: time="2025-09-09T04:55:38.701654244Z" level=info msg="Container aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:38.706015 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1169402321.mount: Deactivated successfully. Sep 9 04:55:38.721497 containerd[1539]: time="2025-09-09T04:55:38.721439171Z" level=info msg="CreateContainer within sandbox \"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1\"" Sep 9 04:55:38.723267 containerd[1539]: time="2025-09-09T04:55:38.723212034Z" level=info msg="StartContainer for \"aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1\"" Sep 9 04:55:38.727773 containerd[1539]: time="2025-09-09T04:55:38.727690530Z" level=info msg="connecting to shim aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1" address="unix:///run/containerd/s/a344fb8e885d11c2fbc0181268d154185feae1e8a7d4f069b3b7fe0abddf910a" protocol=ttrpc version=3 Sep 9 04:55:38.753004 systemd[1]: Started cri-containerd-aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1.scope - libcontainer container aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1. 
Sep 9 04:55:38.800999 containerd[1539]: time="2025-09-09T04:55:38.800946528Z" level=info msg="StartContainer for \"aa3515c636d4b0f9f18dd27fa23ddac1a9ae8eadd7c45a3a662a277ef70aaea1\" returns successfully" Sep 9 04:55:39.049065 containerd[1539]: time="2025-09-09T04:55:39.048196208Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:39.049065 containerd[1539]: time="2025-09-09T04:55:39.048952178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 04:55:39.051016 containerd[1539]: time="2025-09-09T04:55:39.050978486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 370.012262ms" Sep 9 04:55:39.051322 containerd[1539]: time="2025-09-09T04:55:39.051301931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 04:55:39.052649 containerd[1539]: time="2025-09-09T04:55:39.052621029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 04:55:39.056189 containerd[1539]: time="2025-09-09T04:55:39.055981195Z" level=info msg="CreateContainer within sandbox \"0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:55:39.070218 containerd[1539]: time="2025-09-09T04:55:39.070112951Z" level=info msg="Container aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:39.087858 containerd[1539]: 
time="2025-09-09T04:55:39.087816955Z" level=info msg="CreateContainer within sandbox \"0f71f4025154cb3c6b2fe53c3e75729c3a4a55eca871221795f1b26032542be5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af\"" Sep 9 04:55:39.090851 containerd[1539]: time="2025-09-09T04:55:39.090720236Z" level=info msg="StartContainer for \"aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af\"" Sep 9 04:55:39.093610 containerd[1539]: time="2025-09-09T04:55:39.093522194Z" level=info msg="connecting to shim aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af" address="unix:///run/containerd/s/ebe696134dae5ed3c4f2598dba967aeeec4c7cee6ee66a5c1ffe4484902e9c6e" protocol=ttrpc version=3 Sep 9 04:55:39.117031 systemd[1]: Started cri-containerd-aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af.scope - libcontainer container aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af. Sep 9 04:55:39.163442 containerd[1539]: time="2025-09-09T04:55:39.163400760Z" level=info msg="StartContainer for \"aeb51e001c7e8cc2b8cc7da1b38c761855aca9e895e1dbd136b1c4956d23f5af\" returns successfully" Sep 9 04:55:40.733823 kubelet[2752]: I0909 04:55:40.733773 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:41.815566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042220288.mount: Deactivated successfully. 
Sep 9 04:55:42.495458 containerd[1539]: time="2025-09-09T04:55:42.495374633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:42.497043 containerd[1539]: time="2025-09-09T04:55:42.496999581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 04:55:42.497954 containerd[1539]: time="2025-09-09T04:55:42.497922638Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:42.501019 containerd[1539]: time="2025-09-09T04:55:42.500961651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:42.502771 containerd[1539]: time="2025-09-09T04:55:42.502152871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.449410081s" Sep 9 04:55:42.502771 containerd[1539]: time="2025-09-09T04:55:42.502194432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 04:55:42.506181 containerd[1539]: time="2025-09-09T04:55:42.506116341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 04:55:42.507589 containerd[1539]: time="2025-09-09T04:55:42.507529045Z" level=info msg="CreateContainer within sandbox 
\"73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 04:55:42.519595 containerd[1539]: time="2025-09-09T04:55:42.519536695Z" level=info msg="Container a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:42.535262 containerd[1539]: time="2025-09-09T04:55:42.535181208Z" level=info msg="CreateContainer within sandbox \"73321993e27a2961cf780c8d1da255dd19f67061e79cfab31043f736fd6d35e5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\"" Sep 9 04:55:42.535970 containerd[1539]: time="2025-09-09T04:55:42.535939141Z" level=info msg="StartContainer for \"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\"" Sep 9 04:55:42.539839 containerd[1539]: time="2025-09-09T04:55:42.538727430Z" level=info msg="connecting to shim a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6" address="unix:///run/containerd/s/fbb13d3b319ec5d4446669b14d5667b58082b7c62628365a74f0577679a66b4b" protocol=ttrpc version=3 Sep 9 04:55:42.607707 systemd[1]: Started cri-containerd-a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6.scope - libcontainer container a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6. 
Sep 9 04:55:42.736112 containerd[1539]: time="2025-09-09T04:55:42.735977072Z" level=info msg="StartContainer for \"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" returns successfully" Sep 9 04:55:42.791358 kubelet[2752]: I0909 04:55:42.790934 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d8564c8-jpwmc" podStartSLOduration=33.77905649 podStartE2EDuration="41.79090203s" podCreationTimestamp="2025-09-09 04:55:01 +0000 UTC" firstStartedPulling="2025-09-09 04:55:31.040509885 +0000 UTC m=+54.823940022" lastFinishedPulling="2025-09-09 04:55:39.052355465 +0000 UTC m=+62.835785562" observedRunningTime="2025-09-09 04:55:39.745575326 +0000 UTC m=+63.529005463" watchObservedRunningTime="2025-09-09 04:55:42.79090203 +0000 UTC m=+66.574332207" Sep 9 04:55:43.911170 containerd[1539]: time="2025-09-09T04:55:43.911128132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"f4582979df78c14d4ea5eb7eb8fb08a7f3f5a4cd21263c76d295119f8ddb49a3\" pid:5249 exit_status:1 exited_at:{seconds:1757393743 nanos:910702804}" Sep 9 04:55:44.429575 containerd[1539]: time="2025-09-09T04:55:44.428695741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:44.429906 containerd[1539]: time="2025-09-09T04:55:44.429876924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 04:55:44.431597 containerd[1539]: time="2025-09-09T04:55:44.431570958Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:44.436054 containerd[1539]: time="2025-09-09T04:55:44.435990205Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:44.436902 containerd[1539]: time="2025-09-09T04:55:44.436871422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.930699481s" Sep 9 04:55:44.437092 containerd[1539]: time="2025-09-09T04:55:44.437072946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 04:55:44.441411 containerd[1539]: time="2025-09-09T04:55:44.441374551Z" level=info msg="CreateContainer within sandbox \"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 04:55:44.460140 containerd[1539]: time="2025-09-09T04:55:44.460085959Z" level=info msg="Container 7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:44.477585 containerd[1539]: time="2025-09-09T04:55:44.477486302Z" level=info msg="CreateContainer within sandbox \"2023a08cb375bc322b489c3a47f021762aab66f6283612670c34e216f2975df0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf\"" Sep 9 04:55:44.478932 containerd[1539]: time="2025-09-09T04:55:44.478887809Z" level=info msg="StartContainer for \"7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf\"" Sep 9 04:55:44.483569 containerd[1539]: 
time="2025-09-09T04:55:44.483489420Z" level=info msg="connecting to shim 7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf" address="unix:///run/containerd/s/a344fb8e885d11c2fbc0181268d154185feae1e8a7d4f069b3b7fe0abddf910a" protocol=ttrpc version=3 Sep 9 04:55:44.521046 systemd[1]: Started cri-containerd-7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf.scope - libcontainer container 7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf. Sep 9 04:55:44.620601 containerd[1539]: time="2025-09-09T04:55:44.620469356Z" level=info msg="StartContainer for \"7a589eed94371e12105b176af0b457bd89d0d37ba04e3395a0a8a464dc5b7bcf\" returns successfully" Sep 9 04:55:44.824935 kubelet[2752]: I0909 04:55:44.824293 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6brn6" podStartSLOduration=26.110933948 podStartE2EDuration="39.824208367s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:30.724817267 +0000 UTC m=+54.508247404" lastFinishedPulling="2025-09-09 04:55:44.438091686 +0000 UTC m=+68.221521823" observedRunningTime="2025-09-09 04:55:44.818113287 +0000 UTC m=+68.601543504" watchObservedRunningTime="2025-09-09 04:55:44.824208367 +0000 UTC m=+68.607638504" Sep 9 04:55:44.827824 kubelet[2752]: I0909 04:55:44.827634 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nfp9b" podStartSLOduration=29.091239912 podStartE2EDuration="39.827615514s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:31.767289536 +0000 UTC m=+55.550719673" lastFinishedPulling="2025-09-09 04:55:42.503665138 +0000 UTC m=+66.287095275" observedRunningTime="2025-09-09 04:55:42.793129789 +0000 UTC m=+66.576559966" watchObservedRunningTime="2025-09-09 04:55:44.827615514 +0000 UTC m=+68.611045691" Sep 9 04:55:44.914111 containerd[1539]: time="2025-09-09T04:55:44.914001734Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"5d156f10441e20a0d689ae0a8ff1a9d34d6e087625aa4f5836399cf9811fc0e4\" pid:5327 exited_at:{seconds:1757393744 nanos:912990074}" Sep 9 04:55:44.948826 containerd[1539]: time="2025-09-09T04:55:44.947709718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"b009c15c4297835c81283b34612e2ddade53e7f86855e5fe849bb0c7a1bcff9b\" pid:5310 exit_status:1 exited_at:{seconds:1757393744 nanos:946890702}" Sep 9 04:55:45.464407 kubelet[2752]: I0909 04:55:45.464350 2752 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 04:55:45.464407 kubelet[2752]: I0909 04:55:45.464402 2752 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 04:55:46.494774 containerd[1539]: time="2025-09-09T04:55:46.494673710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"f010a7f098a625a8fdd821baacbe038e390f87ca6c2c29bfbb962fa30edd2a6b\" pid:5356 exited_at:{seconds:1757393746 nanos:493371641}" Sep 9 04:55:49.780807 kubelet[2752]: I0909 04:55:49.780051 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:49.910813 kubelet[2752]: I0909 04:55:49.910415 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:49.913326 containerd[1539]: time="2025-09-09T04:55:49.913081105Z" level=info msg="StopContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" with timeout 30 (s)" Sep 9 04:55:49.914096 containerd[1539]: time="2025-09-09T04:55:49.913944887Z" level=info msg="Stop container 
\"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" with signal terminated" Sep 9 04:55:49.975091 systemd[1]: Created slice kubepods-besteffort-poda91add35_9b06_46c6_924b_45edba24337e.slice - libcontainer container kubepods-besteffort-poda91add35_9b06_46c6_924b_45edba24337e.slice. Sep 9 04:55:50.027396 systemd[1]: cri-containerd-7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7.scope: Deactivated successfully. Sep 9 04:55:50.027712 systemd[1]: cri-containerd-7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7.scope: Consumed 2.079s CPU time, 43.5M memory peak, 465K read from disk. Sep 9 04:55:50.033811 containerd[1539]: time="2025-09-09T04:55:50.033617791Z" level=info msg="received exit event container_id:\"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" id:\"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" pid:4973 exit_status:1 exited_at:{seconds:1757393750 nanos:32995215}" Sep 9 04:55:50.033957 containerd[1539]: time="2025-09-09T04:55:50.033828436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" id:\"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" pid:4973 exit_status:1 exited_at:{seconds:1757393750 nanos:32995215}" Sep 9 04:55:50.070341 kubelet[2752]: I0909 04:55:50.070306 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmd4l\" (UniqueName: \"kubernetes.io/projected/a91add35-9b06-46c6-924b-45edba24337e-kube-api-access-hmd4l\") pod \"calico-apiserver-76d8564c8-n2wsn\" (UID: \"a91add35-9b06-46c6-924b-45edba24337e\") " pod="calico-apiserver/calico-apiserver-76d8564c8-n2wsn" Sep 9 04:55:50.071077 kubelet[2752]: I0909 04:55:50.071010 2752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/a91add35-9b06-46c6-924b-45edba24337e-calico-apiserver-certs\") pod \"calico-apiserver-76d8564c8-n2wsn\" (UID: \"a91add35-9b06-46c6-924b-45edba24337e\") " pod="calico-apiserver/calico-apiserver-76d8564c8-n2wsn" Sep 9 04:55:50.075270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7-rootfs.mount: Deactivated successfully. Sep 9 04:55:50.175229 containerd[1539]: time="2025-09-09T04:55:50.174296192Z" level=info msg="StopContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" returns successfully" Sep 9 04:55:50.186568 containerd[1539]: time="2025-09-09T04:55:50.186258418Z" level=info msg="StopPodSandbox for \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\"" Sep 9 04:55:50.207452 containerd[1539]: time="2025-09-09T04:55:50.205700155Z" level=info msg="Container to stop \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 04:55:50.218437 systemd[1]: cri-containerd-229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4.scope: Deactivated successfully. 
Sep 9 04:55:50.221581 containerd[1539]: time="2025-09-09T04:55:50.221524240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" id:\"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" pid:4190 exit_status:137 exited_at:{seconds:1757393750 nanos:220253048}" Sep 9 04:55:50.263627 containerd[1539]: time="2025-09-09T04:55:50.263574757Z" level=info msg="shim disconnected" id=229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4 namespace=k8s.io Sep 9 04:55:50.263764 containerd[1539]: time="2025-09-09T04:55:50.263616398Z" level=warning msg="cleaning up after shim disconnected" id=229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4 namespace=k8s.io Sep 9 04:55:50.263764 containerd[1539]: time="2025-09-09T04:55:50.263647599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 04:55:50.266426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4-rootfs.mount: Deactivated successfully. 
Sep 9 04:55:50.282531 containerd[1539]: time="2025-09-09T04:55:50.282146912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d8564c8-n2wsn,Uid:a91add35-9b06-46c6-924b-45edba24337e,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:50.319716 containerd[1539]: time="2025-09-09T04:55:50.318551764Z" level=info msg="received exit event sandbox_id:\"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" exit_status:137 exited_at:{seconds:1757393750 nanos:220253048}" Sep 9 04:55:50.424648 systemd-networkd[1405]: cali22f9d3e81e5: Link DOWN Sep 9 04:55:50.425106 systemd-networkd[1405]: cali22f9d3e81e5: Lost carrier Sep 9 04:55:50.586538 systemd-networkd[1405]: cali756f703eb63: Link UP Sep 9 04:55:50.587944 systemd-networkd[1405]: cali756f703eb63: Gained carrier Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.394 [INFO][5434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0 calico-apiserver-76d8564c8- calico-apiserver a91add35-9b06-46c6-924b-45edba24337e 1161 0 2025-09-09 04:55:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d8564c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452-0-0-n-1f6e10e4b9 calico-apiserver-76d8564c8-n2wsn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali756f703eb63 [] [] }} ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.394 [INFO][5434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.480 [INFO][5462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" HandleID="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.481 [INFO][5462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" HandleID="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452-0-0-n-1f6e10e4b9", "pod":"calico-apiserver-76d8564c8-n2wsn", "timestamp":"2025-09-09 04:55:50.480301104 +0000 UTC"}, Hostname:"ci-4452-0-0-n-1f6e10e4b9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.481 [INFO][5462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.481 [INFO][5462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.481 [INFO][5462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452-0-0-n-1f6e10e4b9' Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.502 [INFO][5462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.511 [INFO][5462] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.519 [INFO][5462] ipam/ipam.go 511: Trying affinity for 192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.522 [INFO][5462] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.529 [INFO][5462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.64/26 host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.531 [INFO][5462] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.15.64/26 handle="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.538 [INFO][5462] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.546 [INFO][5462] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.15.64/26 handle="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.556 [INFO][5462] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.15.74/26] block=192.168.15.64/26 handle="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.557 [INFO][5462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.74/26] handle="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" host="ci-4452-0-0-n-1f6e10e4b9" Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.557 [INFO][5462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:50.615002 containerd[1539]: 2025-09-09 04:55:50.558 [INFO][5462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.74/26] IPv6=[] ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" HandleID="k8s-pod-network.3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.563 [INFO][5434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0", GenerateName:"calico-apiserver-76d8564c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a91add35-9b06-46c6-924b-45edba24337e", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"76d8564c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"", Pod:"calico-apiserver-76d8564c8-n2wsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali756f703eb63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.564 [INFO][5434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.74/32] ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.564 [INFO][5434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali756f703eb63 ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.589 [INFO][5434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" 
WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.591 [INFO][5434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0", GenerateName:"calico-apiserver-76d8564c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a91add35-9b06-46c6-924b-45edba24337e", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d8564c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452-0-0-n-1f6e10e4b9", ContainerID:"3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e", Pod:"calico-apiserver-76d8564c8-n2wsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali756f703eb63", MAC:"86:b3:d1:bb:ff:b2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:50.616052 containerd[1539]: 2025-09-09 04:55:50.612 [INFO][5434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" Namespace="calico-apiserver" Pod="calico-apiserver-76d8564c8-n2wsn" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--76d8564c8--n2wsn-eth0" Sep 9 04:55:50.652093 containerd[1539]: time="2025-09-09T04:55:50.651990459Z" level=info msg="connecting to shim 3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e" address="unix:///run/containerd/s/63ca51fb94c60b35d4ad2429c89706e99c46f90c2a9126dc4d0b4193502184f5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:50.686977 systemd[1]: Started cri-containerd-3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e.scope - libcontainer container 3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e. Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.413 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.414 [INFO][5452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" iface="eth0" netns="/var/run/netns/cni-7ba8bb87-c17e-e883-9db4-2a4fb16ef25c" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.415 [INFO][5452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" iface="eth0" netns="/var/run/netns/cni-7ba8bb87-c17e-e883-9db4-2a4fb16ef25c" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.437 [INFO][5452] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" after=22.401893ms iface="eth0" netns="/var/run/netns/cni-7ba8bb87-c17e-e883-9db4-2a4fb16ef25c" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.439 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.440 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.510 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.510 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.557 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.739 [INFO][5471] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.739 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.741 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:50.747122 containerd[1539]: 2025-09-09 04:55:50.744 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:55:50.749770 containerd[1539]: time="2025-09-09T04:55:50.747831752Z" level=info msg="TearDown network for sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" successfully" Sep 9 04:55:50.749955 containerd[1539]: time="2025-09-09T04:55:50.749922405Z" level=info msg="StopPodSandbox for \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" returns successfully" Sep 9 04:55:50.794721 kubelet[2752]: I0909 04:55:50.794680 2752 scope.go:117] "RemoveContainer" containerID="7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7" Sep 9 04:55:50.800721 containerd[1539]: time="2025-09-09T04:55:50.800186772Z" level=info msg="RemoveContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\"" Sep 9 04:55:50.833399 containerd[1539]: time="2025-09-09T04:55:50.833356981Z" level=info msg="RemoveContainer for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" returns successfully" Sep 9 04:55:50.835024 kubelet[2752]: I0909 04:55:50.834977 2752 scope.go:117] "RemoveContainer" containerID="7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7" Sep 9 04:55:50.835357 containerd[1539]: time="2025-09-09T04:55:50.835305231Z" level=error msg="ContainerStatus for \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\": not found" Sep 9 04:55:50.835581 kubelet[2752]: E0909 04:55:50.835554 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\": not found" containerID="7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7" Sep 9 04:55:50.838582 kubelet[2752]: I0909 
04:55:50.836904 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7"} err="failed to get container status \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7426277333cdef960e6e558728d6d69dc12e5288caf85dc8f65477a3e076e7a7\": not found" Sep 9 04:55:50.878029 kubelet[2752]: I0909 04:55:50.877977 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-calico-apiserver-certs\") pod \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\" (UID: \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\") " Sep 9 04:55:50.878185 kubelet[2752]: I0909 04:55:50.878042 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhmjr\" (UniqueName: \"kubernetes.io/projected/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-kube-api-access-jhmjr\") pod \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\" (UID: \"897ef4af-ec3b-4639-90ae-9b9b2ec794f6\") " Sep 9 04:55:50.885676 kubelet[2752]: I0909 04:55:50.885286 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-kube-api-access-jhmjr" (OuterVolumeSpecName: "kube-api-access-jhmjr") pod "897ef4af-ec3b-4639-90ae-9b9b2ec794f6" (UID: "897ef4af-ec3b-4639-90ae-9b9b2ec794f6"). InnerVolumeSpecName "kube-api-access-jhmjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 04:55:50.889794 kubelet[2752]: I0909 04:55:50.889438 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "897ef4af-ec3b-4639-90ae-9b9b2ec794f6" (UID: "897ef4af-ec3b-4639-90ae-9b9b2ec794f6"). 
InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 04:55:50.933301 containerd[1539]: time="2025-09-09T04:55:50.933262058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"df0a4692b75ea9b4a04bc22e016e44f82a1add442c9fc5087f19b96bdc935ca9\" pid:5536 exited_at:{seconds:1757393750 nanos:932688683}" Sep 9 04:55:50.977895 containerd[1539]: time="2025-09-09T04:55:50.977837439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d8564c8-n2wsn,Uid:a91add35-9b06-46c6-924b-45edba24337e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e\"" Sep 9 04:55:50.978567 kubelet[2752]: I0909 04:55:50.978533 2752 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-calico-apiserver-certs\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:50.978567 kubelet[2752]: I0909 04:55:50.978565 2752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhmjr\" (UniqueName: \"kubernetes.io/projected/897ef4af-ec3b-4639-90ae-9b9b2ec794f6-kube-api-access-jhmjr\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:50.984992 containerd[1539]: time="2025-09-09T04:55:50.984936141Z" level=info msg="CreateContainer within sandbox \"3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:55:50.995484 containerd[1539]: time="2025-09-09T04:55:50.995424329Z" level=info msg="Container 0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:51.005295 containerd[1539]: time="2025-09-09T04:55:51.005237944Z" level=info msg="CreateContainer within sandbox 
\"3e9b43021ff99575a80f4c2a3bd29c81a66993c3676354f993216a4e32ca827e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7\"" Sep 9 04:55:51.006992 containerd[1539]: time="2025-09-09T04:55:51.006956910Z" level=info msg="StartContainer for \"0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7\"" Sep 9 04:55:51.009802 containerd[1539]: time="2025-09-09T04:55:51.008946162Z" level=info msg="connecting to shim 0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7" address="unix:///run/containerd/s/63ca51fb94c60b35d4ad2429c89706e99c46f90c2a9126dc4d0b4193502184f5" protocol=ttrpc version=3 Sep 9 04:55:51.048171 systemd[1]: Started cri-containerd-0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7.scope - libcontainer container 0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7. Sep 9 04:55:51.078328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4-shm.mount: Deactivated successfully. Sep 9 04:55:51.078637 systemd[1]: run-netns-cni\x2d7ba8bb87\x2dc17e\x2de883\x2d9db4\x2d2a4fb16ef25c.mount: Deactivated successfully. Sep 9 04:55:51.078804 systemd[1]: var-lib-kubelet-pods-897ef4af\x2dec3b\x2d4639\x2d90ae\x2d9b9b2ec794f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhmjr.mount: Deactivated successfully. Sep 9 04:55:51.078944 systemd[1]: var-lib-kubelet-pods-897ef4af\x2dec3b\x2d4639\x2d90ae\x2d9b9b2ec794f6-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 9 04:55:51.110802 systemd[1]: Removed slice kubepods-besteffort-pod897ef4af_ec3b_4639_90ae_9b9b2ec794f6.slice - libcontainer container kubepods-besteffort-pod897ef4af_ec3b_4639_90ae_9b9b2ec794f6.slice. 
Sep 9 04:55:51.110922 systemd[1]: kubepods-besteffort-pod897ef4af_ec3b_4639_90ae_9b9b2ec794f6.slice: Consumed 2.100s CPU time, 43.8M memory peak, 465K read from disk. Sep 9 04:55:51.122219 containerd[1539]: time="2025-09-09T04:55:51.121225735Z" level=info msg="StartContainer for \"0247d1263eda2020e45d044fed21a2662d67f635caa0b8fd655fcc160b4f00a7\" returns successfully" Sep 9 04:55:51.821096 kubelet[2752]: I0909 04:55:51.821022 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d8564c8-n2wsn" podStartSLOduration=2.821002942 podStartE2EDuration="2.821002942s" podCreationTimestamp="2025-09-09 04:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:55:51.819529223 +0000 UTC m=+75.602959360" watchObservedRunningTime="2025-09-09 04:55:51.821002942 +0000 UTC m=+75.604433079" Sep 9 04:55:52.322173 kubelet[2752]: I0909 04:55:52.322123 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897ef4af-ec3b-4639-90ae-9b9b2ec794f6" path="/var/lib/kubelet/pods/897ef4af-ec3b-4639-90ae-9b9b2ec794f6/volumes" Sep 9 04:55:52.625005 systemd-networkd[1405]: cali756f703eb63: Gained IPv6LL Sep 9 04:55:53.813118 kubelet[2752]: I0909 04:55:53.813080 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:54.076558 containerd[1539]: time="2025-09-09T04:55:54.076219210Z" level=info msg="StopContainer for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" with timeout 30 (s)" Sep 9 04:55:54.078152 containerd[1539]: time="2025-09-09T04:55:54.078097025Z" level=info msg="Stop container \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" with signal terminated" Sep 9 04:55:54.145076 systemd[1]: cri-containerd-c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827.scope: Deactivated successfully. 
Sep 9 04:55:54.148287 systemd[1]: cri-containerd-c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827.scope: Consumed 1.485s CPU time, 56.6M memory peak, 4K read from disk. Sep 9 04:55:54.159561 containerd[1539]: time="2025-09-09T04:55:54.159510862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" id:\"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" pid:4940 exit_status:1 exited_at:{seconds:1757393754 nanos:158627357}" Sep 9 04:55:54.159992 containerd[1539]: time="2025-09-09T04:55:54.159573664Z" level=info msg="received exit event container_id:\"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" id:\"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" pid:4940 exit_status:1 exited_at:{seconds:1757393754 nanos:158627357}" Sep 9 04:55:54.193822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827-rootfs.mount: Deactivated successfully. Sep 9 04:55:54.216837 containerd[1539]: time="2025-09-09T04:55:54.216789401Z" level=info msg="StopContainer for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" returns successfully" Sep 9 04:55:54.217336 containerd[1539]: time="2025-09-09T04:55:54.217285335Z" level=info msg="StopPodSandbox for \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\"" Sep 9 04:55:54.217378 containerd[1539]: time="2025-09-09T04:55:54.217346337Z" level=info msg="Container to stop \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 04:55:54.238513 systemd[1]: cri-containerd-fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29.scope: Deactivated successfully. 
Sep 9 04:55:54.242059 containerd[1539]: time="2025-09-09T04:55:54.242010811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" id:\"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" pid:4150 exit_status:137 exited_at:{seconds:1757393754 nanos:241485196}" Sep 9 04:55:54.284404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29-rootfs.mount: Deactivated successfully. Sep 9 04:55:54.287363 containerd[1539]: time="2025-09-09T04:55:54.287179679Z" level=info msg="shim disconnected" id=fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29 namespace=k8s.io Sep 9 04:55:54.287905 containerd[1539]: time="2025-09-09T04:55:54.287843218Z" level=warning msg="cleaning up after shim disconnected" id=fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29 namespace=k8s.io Sep 9 04:55:54.287969 containerd[1539]: time="2025-09-09T04:55:54.287919900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 04:55:54.309497 containerd[1539]: time="2025-09-09T04:55:54.309434403Z" level=info msg="received exit event sandbox_id:\"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" exit_status:137 exited_at:{seconds:1757393754 nanos:241485196}" Sep 9 04:55:54.316278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29-shm.mount: Deactivated successfully. 
Sep 9 04:55:54.386892 systemd-networkd[1405]: calid0be03dd4df: Link DOWN Sep 9 04:55:54.386902 systemd-networkd[1405]: calid0be03dd4df: Lost carrier Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.384 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.384 [INFO][5676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" iface="eth0" netns="/var/run/netns/cni-db7837bf-3599-0262-dcee-fca6e8fdecf6" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.385 [INFO][5676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" iface="eth0" netns="/var/run/netns/cni-db7837bf-3599-0262-dcee-fca6e8fdecf6" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.398 [INFO][5676] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" after=13.949964ms iface="eth0" netns="/var/run/netns/cni-db7837bf-3599-0262-dcee-fca6e8fdecf6" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.398 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.398 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.445 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.446 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.446 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.559 [INFO][5683] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.559 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.562 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:54.567267 containerd[1539]: 2025-09-09 04:55:54.564 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:55:54.570944 containerd[1539]: time="2025-09-09T04:55:54.570882093Z" level=info msg="TearDown network for sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" successfully" Sep 9 04:55:54.570944 containerd[1539]: time="2025-09-09T04:55:54.570935895Z" level=info msg="StopPodSandbox for \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" returns successfully" Sep 9 04:55:54.574906 systemd[1]: run-netns-cni\x2ddb7837bf\x2d3599\x2d0262\x2ddcee\x2dfca6e8fdecf6.mount: Deactivated successfully. 
Sep 9 04:55:54.706997 kubelet[2752]: I0909 04:55:54.706955 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c949512a-ac76-4e34-849b-f3ddd2412487-calico-apiserver-certs\") pod \"c949512a-ac76-4e34-849b-f3ddd2412487\" (UID: \"c949512a-ac76-4e34-849b-f3ddd2412487\") " Sep 9 04:55:54.706997 kubelet[2752]: I0909 04:55:54.707003 2752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxcqd\" (UniqueName: \"kubernetes.io/projected/c949512a-ac76-4e34-849b-f3ddd2412487-kube-api-access-qxcqd\") pod \"c949512a-ac76-4e34-849b-f3ddd2412487\" (UID: \"c949512a-ac76-4e34-849b-f3ddd2412487\") " Sep 9 04:55:54.713646 kubelet[2752]: I0909 04:55:54.713593 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c949512a-ac76-4e34-849b-f3ddd2412487-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c949512a-ac76-4e34-849b-f3ddd2412487" (UID: "c949512a-ac76-4e34-849b-f3ddd2412487"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 04:55:54.715639 systemd[1]: var-lib-kubelet-pods-c949512a\x2dac76\x2d4e34\x2d849b\x2df3ddd2412487-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 9 04:55:54.718844 kubelet[2752]: I0909 04:55:54.716737 2752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c949512a-ac76-4e34-849b-f3ddd2412487-kube-api-access-qxcqd" (OuterVolumeSpecName: "kube-api-access-qxcqd") pod "c949512a-ac76-4e34-849b-f3ddd2412487" (UID: "c949512a-ac76-4e34-849b-f3ddd2412487"). InnerVolumeSpecName "kube-api-access-qxcqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 04:55:54.808710 kubelet[2752]: I0909 04:55:54.808212 2752 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c949512a-ac76-4e34-849b-f3ddd2412487-calico-apiserver-certs\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:54.808710 kubelet[2752]: I0909 04:55:54.808262 2752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxcqd\" (UniqueName: \"kubernetes.io/projected/c949512a-ac76-4e34-849b-f3ddd2412487-kube-api-access-qxcqd\") on node \"ci-4452-0-0-n-1f6e10e4b9\" DevicePath \"\"" Sep 9 04:55:54.821367 kubelet[2752]: I0909 04:55:54.821325 2752 scope.go:117] "RemoveContainer" containerID="c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827" Sep 9 04:55:54.833462 containerd[1539]: time="2025-09-09T04:55:54.833375254Z" level=info msg="RemoveContainer for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\"" Sep 9 04:55:54.841292 containerd[1539]: time="2025-09-09T04:55:54.841240082Z" level=info msg="RemoveContainer for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" returns successfully" Sep 9 04:55:54.841815 systemd[1]: Removed slice kubepods-besteffort-podc949512a_ac76_4e34_849b_f3ddd2412487.slice - libcontainer container kubepods-besteffort-podc949512a_ac76_4e34_849b_f3ddd2412487.slice. Sep 9 04:55:54.841953 systemd[1]: kubepods-besteffort-podc949512a_ac76_4e34_849b_f3ddd2412487.slice: Consumed 1.508s CPU time, 56.8M memory peak, 4K read from disk. 
Sep 9 04:55:54.842451 kubelet[2752]: I0909 04:55:54.842389 2752 scope.go:117] "RemoveContainer" containerID="c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827" Sep 9 04:55:54.843504 containerd[1539]: time="2025-09-09T04:55:54.843411665Z" level=error msg="ContainerStatus for \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\": not found" Sep 9 04:55:54.844652 kubelet[2752]: E0909 04:55:54.844622 2752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\": not found" containerID="c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827" Sep 9 04:55:54.844843 kubelet[2752]: I0909 04:55:54.844655 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827"} err="failed to get container status \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0f7e7bc142b741af6d5d03baa28aac532b0656416a8a670ac4ab87a163e7827\": not found" Sep 9 04:55:55.194881 systemd[1]: var-lib-kubelet-pods-c949512a\x2dac76\x2d4e34\x2d849b\x2df3ddd2412487-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqxcqd.mount: Deactivated successfully. 
Sep 9 04:55:56.323671 kubelet[2752]: I0909 04:55:56.323142 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c949512a-ac76-4e34-849b-f3ddd2412487" path="/var/lib/kubelet/pods/c949512a-ac76-4e34-849b-f3ddd2412487/volumes" Sep 9 04:56:04.829524 containerd[1539]: time="2025-09-09T04:56:04.829311891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"6dd4ce8b5a974868bc11a8334e82c52e6a9dfda992f01c2a4b9178bd06d5ad1e\" pid:5719 exited_at:{seconds:1757393764 nanos:828952798}" Sep 9 04:56:07.432078 containerd[1539]: time="2025-09-09T04:56:07.431969198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"8029cf0f801644dfb580fd1632ac24bc42b6878b494c1303fcfb9c434a06f967\" pid:5741 exited_at:{seconds:1757393767 nanos:431521942}" Sep 9 04:56:14.842622 containerd[1539]: time="2025-09-09T04:56:14.842438166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"af2037b319f4f0db990abd789468caf6050ece4848003ad03c76645587259953\" pid:5775 exited_at:{seconds:1757393774 nanos:842118673}" Sep 9 04:56:16.501774 containerd[1539]: time="2025-09-09T04:56:16.501672575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"f82d1e25d7f7bdc227d2b1a85459ecbaf08c9b8ece779dbc0b28ac92dc518bfb\" pid:5797 exited_at:{seconds:1757393776 nanos:500413363}" Sep 9 04:56:32.046142 systemd[1]: Started sshd@7-128.140.114.243:22-34.71.82.96:38922.service - OpenSSH per-connection server daemon (34.71.82.96:38922). 
Sep 9 04:56:32.573244 sshd[5811]: Invalid user from 34.71.82.96 port 38922 Sep 9 04:56:36.358734 containerd[1539]: time="2025-09-09T04:56:36.358315964Z" level=info msg="StopPodSandbox for \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\"" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.409 [WARNING][5827] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.409 [INFO][5827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.409 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" iface="eth0" netns="" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.409 [INFO][5827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.409 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.431 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.431 [INFO][5834] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.431 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.444 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.444 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.449 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:36.457023 containerd[1539]: 2025-09-09 04:56:36.453 [INFO][5827] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.457023 containerd[1539]: time="2025-09-09T04:56:36.456918660Z" level=info msg="TearDown network for sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" successfully" Sep 9 04:56:36.457023 containerd[1539]: time="2025-09-09T04:56:36.456943701Z" level=info msg="StopPodSandbox for \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" returns successfully" Sep 9 04:56:36.458167 containerd[1539]: time="2025-09-09T04:56:36.458093596Z" level=info msg="RemovePodSandbox for \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\"" Sep 9 04:56:36.458344 containerd[1539]: time="2025-09-09T04:56:36.458149678Z" level=info msg="Forcibly stopping sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\"" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.515 [WARNING][5848] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.516 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.516 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" iface="eth0" netns="" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.516 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.516 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.545 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.545 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.545 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.558 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.558 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" HandleID="k8s-pod-network.229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--7bvkb-eth0" Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.561 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:36.567175 containerd[1539]: 2025-09-09 04:56:36.564 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4" Sep 9 04:56:36.567175 containerd[1539]: time="2025-09-09T04:56:36.566998219Z" level=info msg="TearDown network for sandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" successfully" Sep 9 04:56:36.571065 containerd[1539]: time="2025-09-09T04:56:36.571023209Z" level=info msg="Ensure that sandbox 229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4 in task-service has been cleanup successfully" Sep 9 04:56:36.577118 containerd[1539]: time="2025-09-09T04:56:36.577062414Z" level=info msg="RemovePodSandbox \"229ce94dce646708b89f73af8b72705d9c882b8eb4f9d4d96ae42a9de241c2d4\" returns successfully" Sep 9 04:56:36.578499 containerd[1539]: time="2025-09-09T04:56:36.578427479Z" level=info msg="StopPodSandbox for \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\"" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.622 [WARNING][5869] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the 
clean up ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.622 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.622 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" iface="eth0" netns="" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.622 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.623 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.646 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.646 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.646 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.660 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.660 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.665 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:36.669954 containerd[1539]: 2025-09-09 04:56:36.667 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.670508 containerd[1539]: time="2025-09-09T04:56:36.670023444Z" level=info msg="TearDown network for sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" successfully" Sep 9 04:56:36.670508 containerd[1539]: time="2025-09-09T04:56:36.670056646Z" level=info msg="StopPodSandbox for \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" returns successfully" Sep 9 04:56:36.671437 containerd[1539]: time="2025-09-09T04:56:36.671233021Z" level=info msg="RemovePodSandbox for \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\"" Sep 9 04:56:36.671437 containerd[1539]: time="2025-09-09T04:56:36.671293424Z" level=info msg="Forcibly stopping sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\"" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.721 [WARNING][5890] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" WorkloadEndpoint="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.721 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.721 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" iface="eth0" netns="" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.721 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.721 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.747 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.747 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.747 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.757 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.757 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" HandleID="k8s-pod-network.fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Workload="ci--4452--0--0--n--1f6e10e4b9-k8s-calico--apiserver--86b766bf49--45pbp-eth0" Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.759 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:36.762663 containerd[1539]: 2025-09-09 04:56:36.761 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29" Sep 9 04:56:36.763308 containerd[1539]: time="2025-09-09T04:56:36.762751383Z" level=info msg="TearDown network for sandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" successfully" Sep 9 04:56:36.764444 containerd[1539]: time="2025-09-09T04:56:36.764409502Z" level=info msg="Ensure that sandbox fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29 in task-service has been cleanup successfully" Sep 9 04:56:36.767647 containerd[1539]: time="2025-09-09T04:56:36.767599972Z" level=info msg="RemovePodSandbox \"fe56c86df6353d5d7b74ab6d8800311fb7c4409b534992370f323aa5bcd9ee29\" returns successfully" Sep 9 04:56:37.170090 containerd[1539]: time="2025-09-09T04:56:37.170045053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"3e16f77d1bccd762f2a97e518282067fced54832e683dabc70d857f15233456f\" pid:5916 exited_at:{seconds:1757393797 nanos:169682115}" Sep 9 
04:56:40.033009 sshd[5811]: Connection closed by invalid user 34.71.82.96 port 38922 [preauth] Sep 9 04:56:40.036789 systemd[1]: sshd@7-128.140.114.243:22-34.71.82.96:38922.service: Deactivated successfully. Sep 9 04:56:41.450729 systemd[1]: Started sshd@8-128.140.114.243:22-147.75.109.163:49338.service - OpenSSH per-connection server daemon (147.75.109.163:49338). Sep 9 04:56:42.451830 sshd[5933]: Accepted publickey for core from 147.75.109.163 port 49338 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:56:42.456294 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:56:42.465733 systemd-logind[1515]: New session 8 of user core. Sep 9 04:56:42.475022 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 04:56:43.227848 sshd[5936]: Connection closed by 147.75.109.163 port 49338 Sep 9 04:56:43.229034 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Sep 9 04:56:43.235811 systemd[1]: sshd@8-128.140.114.243:22-147.75.109.163:49338.service: Deactivated successfully. Sep 9 04:56:43.238472 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 04:56:43.239948 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Sep 9 04:56:43.242478 systemd-logind[1515]: Removed session 8. 
Sep 9 04:56:44.860773 containerd[1539]: time="2025-09-09T04:56:44.860599087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"1fd303a16c01e6d347ac98f7de467cf1fee9890afcea90beb132f8f3d8bf7d6a\" pid:5966 exited_at:{seconds:1757393804 nanos:860223588}" Sep 9 04:56:46.501477 containerd[1539]: time="2025-09-09T04:56:46.501211606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"78bae301f41e86072808c0e875f12aa818f0620b02f7a095ef84452e54210078\" pid:5986 exited_at:{seconds:1757393806 nanos:500864029}" Sep 9 04:56:48.406031 systemd[1]: Started sshd@9-128.140.114.243:22-147.75.109.163:49350.service - OpenSSH per-connection server daemon (147.75.109.163:49350). Sep 9 04:56:49.432980 sshd[5997]: Accepted publickey for core from 147.75.109.163 port 49350 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:56:49.435949 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:56:49.444531 systemd-logind[1515]: New session 9 of user core. Sep 9 04:56:49.450074 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 04:56:50.236534 sshd[6000]: Connection closed by 147.75.109.163 port 49350 Sep 9 04:56:50.236439 sshd-session[5997]: pam_unix(sshd:session): session closed for user core Sep 9 04:56:50.246291 systemd[1]: sshd@9-128.140.114.243:22-147.75.109.163:49350.service: Deactivated successfully. Sep 9 04:56:50.246482 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Sep 9 04:56:50.249729 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 04:56:50.255955 systemd-logind[1515]: Removed session 9. Sep 9 04:56:50.406885 systemd[1]: Started sshd@10-128.140.114.243:22-147.75.109.163:52482.service - OpenSSH per-connection server daemon (147.75.109.163:52482). 
Sep 9 04:56:50.834192 containerd[1539]: time="2025-09-09T04:56:50.834142124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"92c07c9842c621cf37af1dbb71d10bff3ad3c843c5cdb629c74ea371b80672b8\" pid:6029 exited_at:{seconds:1757393810 nanos:833874431}" Sep 9 04:56:51.405516 sshd[6014]: Accepted publickey for core from 147.75.109.163 port 52482 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:56:51.408096 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:56:51.414238 systemd-logind[1515]: New session 10 of user core. Sep 9 04:56:51.431098 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 04:56:52.200465 sshd[6039]: Connection closed by 147.75.109.163 port 52482 Sep 9 04:56:52.201084 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Sep 9 04:56:52.209945 systemd[1]: sshd@10-128.140.114.243:22-147.75.109.163:52482.service: Deactivated successfully. Sep 9 04:56:52.212487 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 04:56:52.213388 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Sep 9 04:56:52.215667 systemd-logind[1515]: Removed session 10. Sep 9 04:56:52.376534 systemd[1]: Started sshd@11-128.140.114.243:22-147.75.109.163:52486.service - OpenSSH per-connection server daemon (147.75.109.163:52486). Sep 9 04:56:53.404127 sshd[6054]: Accepted publickey for core from 147.75.109.163 port 52486 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:56:53.406164 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:56:53.414876 systemd-logind[1515]: New session 11 of user core. Sep 9 04:56:53.421119 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 9 04:56:54.188674 sshd[6057]: Connection closed by 147.75.109.163 port 52486 Sep 9 04:56:54.189432 sshd-session[6054]: pam_unix(sshd:session): session closed for user core Sep 9 04:56:54.194316 systemd[1]: sshd@11-128.140.114.243:22-147.75.109.163:52486.service: Deactivated successfully. Sep 9 04:56:54.196886 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 04:56:54.198119 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Sep 9 04:56:54.201196 systemd-logind[1515]: Removed session 11. Sep 9 04:56:59.357184 systemd[1]: Started sshd@12-128.140.114.243:22-147.75.109.163:52496.service - OpenSSH per-connection server daemon (147.75.109.163:52496). Sep 9 04:57:00.351986 sshd[6073]: Accepted publickey for core from 147.75.109.163 port 52496 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:00.354324 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:00.359889 systemd-logind[1515]: New session 12 of user core. Sep 9 04:57:00.363929 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 04:57:01.115693 sshd[6077]: Connection closed by 147.75.109.163 port 52496 Sep 9 04:57:01.116549 sshd-session[6073]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:01.123431 systemd[1]: sshd@12-128.140.114.243:22-147.75.109.163:52496.service: Deactivated successfully. Sep 9 04:57:01.127650 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 04:57:01.130194 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Sep 9 04:57:01.133337 systemd-logind[1515]: Removed session 12. 
Sep 9 04:57:04.774154 containerd[1539]: time="2025-09-09T04:57:04.773955462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"71d593a8d4dcf6724a4c707313258167d7257344a4424632c11de8296100308f\" pid:6100 exited_at:{seconds:1757393824 nanos:773360752}" Sep 9 04:57:06.290610 systemd[1]: Started sshd@13-128.140.114.243:22-147.75.109.163:34404.service - OpenSSH per-connection server daemon (147.75.109.163:34404). Sep 9 04:57:07.166536 containerd[1539]: time="2025-09-09T04:57:07.166491756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"c11cf4c649cc045128fab0dc9a5ce8ddc1a4f3b03d4184df43f07a24baab2d68\" pid:6141 exited_at:{seconds:1757393827 nanos:166007819}" Sep 9 04:57:07.295968 sshd[6126]: Accepted publickey for core from 147.75.109.163 port 34404 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:07.298631 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:07.305151 systemd-logind[1515]: New session 13 of user core. Sep 9 04:57:07.310140 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 04:57:08.056867 sshd[6151]: Connection closed by 147.75.109.163 port 34404 Sep 9 04:57:08.057806 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:08.064489 systemd[1]: sshd@13-128.140.114.243:22-147.75.109.163:34404.service: Deactivated successfully. Sep 9 04:57:08.067278 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 04:57:08.068360 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Sep 9 04:57:08.070499 systemd-logind[1515]: Removed session 13. Sep 9 04:57:13.229960 systemd[1]: Started sshd@14-128.140.114.243:22-147.75.109.163:59220.service - OpenSSH per-connection server daemon (147.75.109.163:59220). 
Sep 9 04:57:14.240653 sshd[6170]: Accepted publickey for core from 147.75.109.163 port 59220 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:14.243315 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:14.252660 systemd-logind[1515]: New session 14 of user core. Sep 9 04:57:14.257007 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 04:57:14.949671 containerd[1539]: time="2025-09-09T04:57:14.949461206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"12c37c0244ba0052c37aecdde29cdd99f24d36d4b4fa8517767fae2cfd3bf7a8\" pid:6196 exited_at:{seconds:1757393834 nanos:949039423}" Sep 9 04:57:15.087996 sshd[6175]: Connection closed by 147.75.109.163 port 59220 Sep 9 04:57:15.088478 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:15.095893 systemd[1]: sshd@14-128.140.114.243:22-147.75.109.163:59220.service: Deactivated successfully. Sep 9 04:57:15.101311 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 04:57:15.104099 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Sep 9 04:57:15.105916 systemd-logind[1515]: Removed session 14. Sep 9 04:57:15.261321 systemd[1]: Started sshd@15-128.140.114.243:22-147.75.109.163:59234.service - OpenSSH per-connection server daemon (147.75.109.163:59234). Sep 9 04:57:16.278403 sshd[6211]: Accepted publickey for core from 147.75.109.163 port 59234 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:16.280517 sshd-session[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:16.287738 systemd-logind[1515]: New session 15 of user core. Sep 9 04:57:16.296118 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 9 04:57:16.540351 containerd[1539]: time="2025-09-09T04:57:16.540068255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"519cf3b6b7ea5d7832e83b6155e4b75e3e4e4cad232932b493c719857bfd4ec5\" pid:6227 exited_at:{seconds:1757393836 nanos:538621508}" Sep 9 04:57:17.199476 sshd[6214]: Connection closed by 147.75.109.163 port 59234 Sep 9 04:57:17.201619 sshd-session[6211]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:17.205720 systemd[1]: sshd@15-128.140.114.243:22-147.75.109.163:59234.service: Deactivated successfully. Sep 9 04:57:17.210866 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 04:57:17.213932 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Sep 9 04:57:17.217168 systemd-logind[1515]: Removed session 15. Sep 9 04:57:17.377034 systemd[1]: Started sshd@16-128.140.114.243:22-147.75.109.163:59246.service - OpenSSH per-connection server daemon (147.75.109.163:59246). Sep 9 04:57:18.388259 sshd[6246]: Accepted publickey for core from 147.75.109.163 port 59246 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:18.390321 sshd-session[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:18.396799 systemd-logind[1515]: New session 16 of user core. Sep 9 04:57:18.403207 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 04:57:21.075626 sshd[6249]: Connection closed by 147.75.109.163 port 59246 Sep 9 04:57:21.076452 sshd-session[6246]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:21.083011 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Sep 9 04:57:21.083660 systemd[1]: sshd@16-128.140.114.243:22-147.75.109.163:59246.service: Deactivated successfully. Sep 9 04:57:21.086893 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 9 04:57:21.087262 systemd[1]: session-16.scope: Consumed 603ms CPU time, 70.1M memory peak. Sep 9 04:57:21.090703 systemd-logind[1515]: Removed session 16. Sep 9 04:57:21.260054 systemd[1]: Started sshd@17-128.140.114.243:22-147.75.109.163:39934.service - OpenSSH per-connection server daemon (147.75.109.163:39934). Sep 9 04:57:22.348767 sshd[6270]: Accepted publickey for core from 147.75.109.163 port 39934 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:22.351690 sshd-session[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:22.359106 systemd-logind[1515]: New session 17 of user core. Sep 9 04:57:22.371147 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 04:57:23.279309 sshd[6273]: Connection closed by 147.75.109.163 port 39934 Sep 9 04:57:23.279821 sshd-session[6270]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:23.287277 systemd[1]: sshd@17-128.140.114.243:22-147.75.109.163:39934.service: Deactivated successfully. Sep 9 04:57:23.290715 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 04:57:23.292821 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Sep 9 04:57:23.295344 systemd-logind[1515]: Removed session 17. Sep 9 04:57:23.454296 systemd[1]: Started sshd@18-128.140.114.243:22-147.75.109.163:39944.service - OpenSSH per-connection server daemon (147.75.109.163:39944). Sep 9 04:57:24.464737 sshd[6283]: Accepted publickey for core from 147.75.109.163 port 39944 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:24.466927 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:24.474268 systemd-logind[1515]: New session 18 of user core. Sep 9 04:57:24.478959 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 04:57:25.226713 sshd[6286]: Connection closed by 147.75.109.163 port 39944 Sep 9 04:57:25.227627 sshd-session[6283]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:25.234554 systemd[1]: sshd@18-128.140.114.243:22-147.75.109.163:39944.service: Deactivated successfully. Sep 9 04:57:25.234680 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Sep 9 04:57:25.239230 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 04:57:25.242425 systemd-logind[1515]: Removed session 18. Sep 9 04:57:30.396281 systemd[1]: Started sshd@19-128.140.114.243:22-147.75.109.163:39894.service - OpenSSH per-connection server daemon (147.75.109.163:39894). Sep 9 04:57:31.398532 sshd[6301]: Accepted publickey for core from 147.75.109.163 port 39894 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:31.401027 sshd-session[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:31.407533 systemd-logind[1515]: New session 19 of user core. Sep 9 04:57:31.414078 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 04:57:32.151193 sshd[6305]: Connection closed by 147.75.109.163 port 39894 Sep 9 04:57:32.152288 sshd-session[6301]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:32.156914 systemd[1]: sshd@19-128.140.114.243:22-147.75.109.163:39894.service: Deactivated successfully. Sep 9 04:57:32.159651 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 04:57:32.162064 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. Sep 9 04:57:32.164253 systemd-logind[1515]: Removed session 19. 
Sep 9 04:57:37.178165 containerd[1539]: time="2025-09-09T04:57:37.178095247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"20ba57498779b084ad3096430591873932de9f4af12f01c4566cbe76a0e452a4\" pid:6331 exited_at:{seconds:1757393857 nanos:176461511}" Sep 9 04:57:37.329325 systemd[1]: Started sshd@20-128.140.114.243:22-147.75.109.163:39902.service - OpenSSH per-connection server daemon (147.75.109.163:39902). Sep 9 04:57:38.336535 sshd[6342]: Accepted publickey for core from 147.75.109.163 port 39902 ssh2: RSA SHA256:BJJsF+/R5vtVbpUFCd789n0h3+dB8+hZ951f8X5U2so Sep 9 04:57:38.338911 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:57:38.344820 systemd-logind[1515]: New session 20 of user core. Sep 9 04:57:38.350057 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 04:57:39.103311 sshd[6345]: Connection closed by 147.75.109.163 port 39902 Sep 9 04:57:39.105110 sshd-session[6342]: pam_unix(sshd:session): session closed for user core Sep 9 04:57:39.111153 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. Sep 9 04:57:39.111646 systemd[1]: sshd@20-128.140.114.243:22-147.75.109.163:39902.service: Deactivated successfully. Sep 9 04:57:39.115874 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 04:57:39.122418 systemd-logind[1515]: Removed session 20. 
Sep 9 04:57:44.834594 containerd[1539]: time="2025-09-09T04:57:44.834547436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"5f60924c40f8e9f06159ddbd77b78bd8f7eee3ff264c38b30b78c4d7fde9d328\" pid:6370 exited_at:{seconds:1757393864 nanos:833996560}" Sep 9 04:57:46.448661 containerd[1539]: time="2025-09-09T04:57:46.448569132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"d43fe84a8cb084a207857956bd69f30cd8bf1139ee6c47c00fccf2060028915b\" pid:6390 exited_at:{seconds:1757393866 nanos:447862537}" Sep 9 04:57:50.842863 containerd[1539]: time="2025-09-09T04:57:50.842523115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"5d4aab7f4c65061c3935844e4765e97331f5ac43e557b4a700ccbb8fa8f0e907\" pid:6414 exited_at:{seconds:1757393870 nanos:842132597}" Sep 9 04:57:53.753522 systemd[1]: cri-containerd-97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a.scope: Deactivated successfully. Sep 9 04:57:53.754442 systemd[1]: cri-containerd-97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a.scope: Consumed 24.405s CPU time, 103M memory peak, 3.3M read from disk. 
Sep 9 04:57:53.758219 containerd[1539]: time="2025-09-09T04:57:53.758079394Z" level=info msg="received exit event container_id:\"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" id:\"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" pid:3072 exit_status:1 exited_at:{seconds:1757393873 nanos:757541435}" Sep 9 04:57:53.758910 containerd[1539]: time="2025-09-09T04:57:53.758139674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" id:\"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" pid:3072 exit_status:1 exited_at:{seconds:1757393873 nanos:757541435}" Sep 9 04:57:53.783449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a-rootfs.mount: Deactivated successfully. Sep 9 04:57:54.112942 systemd[1]: cri-containerd-813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e.scope: Deactivated successfully. Sep 9 04:57:54.113321 systemd[1]: cri-containerd-813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e.scope: Consumed 4.517s CPU time, 69.7M memory peak, 2.7M read from disk. 
Sep 9 04:57:54.116410 containerd[1539]: time="2025-09-09T04:57:54.115430559Z" level=info msg="received exit event container_id:\"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\" id:\"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\" pid:2579 exit_status:1 exited_at:{seconds:1757393874 nanos:113598281}"
Sep 9 04:57:54.116410 containerd[1539]: time="2025-09-09T04:57:54.116155518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\" id:\"813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e\" pid:2579 exit_status:1 exited_at:{seconds:1757393874 nanos:113598281}"
Sep 9 04:57:54.148588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e-rootfs.mount: Deactivated successfully.
Sep 9 04:57:54.164146 kubelet[2752]: E0909 04:57:54.164093    2752 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58152->10.0.0.2:2379: read: connection timed out"
Sep 9 04:57:54.220366 kubelet[2752]: I0909 04:57:54.220047    2752 scope.go:117] "RemoveContainer" containerID="813c3f00306f9e225aec0fcccb595b7e456a781f40338a1b015f51095d07f91e"
Sep 9 04:57:54.223555 kubelet[2752]: I0909 04:57:54.223528    2752 scope.go:117] "RemoveContainer" containerID="97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a"
Sep 9 04:57:54.228047 containerd[1539]: time="2025-09-09T04:57:54.227991052Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 9 04:57:54.228438 containerd[1539]: time="2025-09-09T04:57:54.228400091Z" level=info msg="CreateContainer within sandbox \"01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 9 04:57:54.243306 containerd[1539]: time="2025-09-09T04:57:54.243256312Z" level=info msg="Container bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:57:54.248046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839182767.mount: Deactivated successfully.
Sep 9 04:57:54.249331 containerd[1539]: time="2025-09-09T04:57:54.248391345Z" level=info msg="Container 76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:57:54.260196 containerd[1539]: time="2025-09-09T04:57:54.260102410Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\""
Sep 9 04:57:54.262765 containerd[1539]: time="2025-09-09T04:57:54.261728727Z" level=info msg="StartContainer for \"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\""
Sep 9 04:57:54.262765 containerd[1539]: time="2025-09-09T04:57:54.262658246Z" level=info msg="connecting to shim bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a" address="unix:///run/containerd/s/0b6be4896e6509e395089e41c9f5415c1c5a093f97f827117f3a71f54e84bbd7" protocol=ttrpc version=3
Sep 9 04:57:54.264187 containerd[1539]: time="2025-09-09T04:57:54.264137804Z" level=info msg="CreateContainer within sandbox \"01c3b4174e58dcc59dbd16d67bc4082503b8bfb654017ddee2aa314d17b5c95c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889\""
Sep 9 04:57:54.265138 containerd[1539]: time="2025-09-09T04:57:54.265101563Z" level=info msg="StartContainer for \"76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889\""
Sep 9 04:57:54.266599 containerd[1539]: time="2025-09-09T04:57:54.266557441Z" level=info msg="connecting to shim 76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889" address="unix:///run/containerd/s/1bd8d63ba1034314f7dab6c6d209c1563ffeb04032fbafd9a7453b90abc0786b" protocol=ttrpc version=3
Sep 9 04:57:54.287024 systemd[1]: Started cri-containerd-bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a.scope - libcontainer container bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a.
Sep 9 04:57:54.295284 systemd[1]: Started cri-containerd-76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889.scope - libcontainer container 76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889.
Sep 9 04:57:54.344815 containerd[1539]: time="2025-09-09T04:57:54.344766299Z" level=info msg="StartContainer for \"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" returns successfully"
Sep 9 04:57:54.372357 containerd[1539]: time="2025-09-09T04:57:54.371448384Z" level=info msg="StartContainer for \"76bddcd0371119e0ddfc33a851dc7fae584bbef01f755a4a61e7f5d564127889\" returns successfully"
Sep 9 04:57:54.995660 kubelet[2752]: I0909 04:57:54.995607    2752 status_manager.go:851] "Failed to get status for pod" podUID="37dcac0bab3381dafeb47cb52e4dca4a" pod="kube-system/kube-controller-manager-ci-4452-0-0-n-1f6e10e4b9" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58066->10.0.0.2:2379: read: connection timed out"
Sep 9 04:57:59.032336 kubelet[2752]: E0909 04:57:59.031758    2752 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57952->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4452-0-0-n-1f6e10e4b9.186384699082f99a  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4452-0-0-n-1f6e10e4b9,UID:687a54f3106bb66e0d5e6125b138a2bd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-n-1f6e10e4b9,},FirstTimestamp:2025-09-09 04:57:48.535560602 +0000 UTC m=+192.318990779,LastTimestamp:2025-09-09 04:57:48.535560602 +0000 UTC m=+192.318990779,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-n-1f6e10e4b9,}"
Sep 9 04:57:59.737918 systemd[1]: cri-containerd-466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9.scope: Deactivated successfully.
Sep 9 04:57:59.739906 systemd[1]: cri-containerd-466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9.scope: Consumed 1.752s CPU time, 23.3M memory peak, 2.7M read from disk.
Sep 9 04:57:59.747832 containerd[1539]: time="2025-09-09T04:57:59.745522497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\" id:\"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\" pid:2609 exit_status:1 exited_at:{seconds:1757393879 nanos:744490935}"
Sep 9 04:57:59.750721 containerd[1539]: time="2025-09-09T04:57:59.749006543Z" level=info msg="received exit event container_id:\"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\" id:\"466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9\" pid:2609 exit_status:1 exited_at:{seconds:1757393879 nanos:744490935}"
Sep 9 04:57:59.805487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9-rootfs.mount: Deactivated successfully.
Sep 9 04:58:00.257092 kubelet[2752]: I0909 04:58:00.256999    2752 scope.go:117] "RemoveContainer" containerID="466a25b14111b30a30335246947d685a07ec318145ad2cf045474d59c47b84d9"
Sep 9 04:58:00.260215 containerd[1539]: time="2025-09-09T04:58:00.260152961Z" level=info msg="CreateContainer within sandbox \"581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 9 04:58:00.279394 containerd[1539]: time="2025-09-09T04:58:00.276512884Z" level=info msg="Container 17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:58:00.285592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263642111.mount: Deactivated successfully.
Sep 9 04:58:00.290970 containerd[1539]: time="2025-09-09T04:58:00.290859921Z" level=info msg="CreateContainer within sandbox \"581b9049a0f3e26fa9baa3cc967de7b68dba9c04405ba26cc57d3d2c39f4b4d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328\""
Sep 9 04:58:00.292021 containerd[1539]: time="2025-09-09T04:58:00.291976804Z" level=info msg="StartContainer for \"17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328\""
Sep 9 04:58:00.294051 containerd[1539]: time="2025-09-09T04:58:00.293873489Z" level=info msg="connecting to shim 17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328" address="unix:///run/containerd/s/aaeb5e2a6fa05bf5a7f710d4a0c2d6ec3c20fb18cef9fdaf1bd80b966e1a21f6" protocol=ttrpc version=3
Sep 9 04:58:00.325097 systemd[1]: Started cri-containerd-17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328.scope - libcontainer container 17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328.
Sep 9 04:58:00.385939 containerd[1539]: time="2025-09-09T04:58:00.385900730Z" level=info msg="StartContainer for \"17a9ce3127842b7644e296b344bc14d00d3a002ec34812ac3dc657f534d90328\" returns successfully"
Sep 9 04:58:04.165773 kubelet[2752]: E0909 04:58:04.164842    2752 controller.go:195] "Failed to update lease" err="Put \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 04:58:04.772629 containerd[1539]: time="2025-09-09T04:58:04.772578411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"4ed22904e2de8854e00c0f1c1180f9975a54bb98e95b76ba31b6729152c3db4d\" pid:6568 exited_at:{seconds:1757393884 nanos:771760287}"
Sep 9 04:58:05.647509 systemd[1]: cri-containerd-bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a.scope: Deactivated successfully.
Sep 9 04:58:05.648629 systemd[1]: cri-containerd-bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a.scope: Consumed 266ms CPU time, 40M memory peak, 1.9M read from disk.
Sep 9 04:58:05.652258 containerd[1539]: time="2025-09-09T04:58:05.651113927Z" level=info msg="received exit event container_id:\"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" id:\"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" pid:6472 exit_status:1 exited_at:{seconds:1757393885 nanos:650635685}"
Sep 9 04:58:05.653433 containerd[1539]: time="2025-09-09T04:58:05.652904537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" id:\"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" pid:6472 exit_status:1 exited_at:{seconds:1757393885 nanos:650635685}"
Sep 9 04:58:05.690423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a-rootfs.mount: Deactivated successfully.
Sep 9 04:58:06.290001 kubelet[2752]: I0909 04:58:06.289966    2752 scope.go:117] "RemoveContainer" containerID="97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a"
Sep 9 04:58:06.290552 kubelet[2752]: I0909 04:58:06.290490    2752 scope.go:117] "RemoveContainer" containerID="bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a"
Sep 9 04:58:06.290960 kubelet[2752]: E0909 04:58:06.290926    2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-sjl9f_tigera-operator(2cebccc6-7f81-4211-bc76-13c25e966f00)\"" pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" podUID="2cebccc6-7f81-4211-bc76-13c25e966f00"
Sep 9 04:58:06.293043 containerd[1539]: time="2025-09-09T04:58:06.292974335Z" level=info msg="RemoveContainer for \"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\""
Sep 9 04:58:06.299533 containerd[1539]: time="2025-09-09T04:58:06.299430655Z" level=info msg="RemoveContainer for \"97557d872fe402a7698436de8a42cd3f550bf933b8b3427d5f5972e9d6df603a\" returns successfully"
Sep 9 04:58:07.160653 containerd[1539]: time="2025-09-09T04:58:07.160600788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"4595735e0f7d92abc9b0b3327c47eb5022b7c5b6179e5ed408e71e51f962ca04\" pid:6603 exited_at:{seconds:1757393887 nanos:160241426}"
Sep 9 04:58:14.166923 kubelet[2752]: E0909 04:58:14.166852    2752 controller.go:195] "Failed to update lease" err="Put \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 04:58:14.833293 containerd[1539]: time="2025-09-09T04:58:14.833058344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"c3f0e21acdb7fe792ad6e68a7c117abfdd1531ec8d2bd7860f40455a1bc920bd\" pid:6635 exit_status:1 exited_at:{seconds:1757393894 nanos:832496538}"
Sep 9 04:58:16.445408 containerd[1539]: time="2025-09-09T04:58:16.445361436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"c78b04280dd10ca5fc516e41a09213317260e6ced324b419c26070962982f065\" pid:6656 exited_at:{seconds:1757393896 nanos:445037833}"
Sep 9 04:58:18.321384 kubelet[2752]: I0909 04:58:18.321319    2752 scope.go:117] "RemoveContainer" containerID="bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a"
Sep 9 04:58:18.324470 containerd[1539]: time="2025-09-09T04:58:18.324414340Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
Sep 9 04:58:18.336542 containerd[1539]: time="2025-09-09T04:58:18.336405888Z" level=info msg="Container 01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:58:18.344003 containerd[1539]: time="2025-09-09T04:58:18.343956542Z" level=info msg="CreateContainer within sandbox \"f7c419e06946d6db0714dda7294a8551194b4b53bd9f5e283392ef6aca719e3f\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\""
Sep 9 04:58:18.344809 containerd[1539]: time="2025-09-09T04:58:18.344579909Z" level=info msg="StartContainer for \"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\""
Sep 9 04:58:18.345831 containerd[1539]: time="2025-09-09T04:58:18.345802364Z" level=info msg="connecting to shim 01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2" address="unix:///run/containerd/s/0b6be4896e6509e395089e41c9f5415c1c5a093f97f827117f3a71f54e84bbd7" protocol=ttrpc version=3
Sep 9 04:58:18.376091 systemd[1]: Started cri-containerd-01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2.scope - libcontainer container 01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2.
Sep 9 04:58:18.417458 containerd[1539]: time="2025-09-09T04:58:18.417412011Z" level=info msg="StartContainer for \"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\" returns successfully"
Sep 9 04:58:20.815106 systemd[1]: Started sshd@21-128.140.114.243:22-8.137.121.98:54792.service - OpenSSH per-connection server daemon (8.137.121.98:54792).
Sep 9 04:58:21.765966 sshd[6700]: Invalid user from 8.137.121.98 port 54792
Sep 9 04:58:24.168763 kubelet[2752]: E0909 04:58:24.168048    2752 controller.go:195] "Failed to update lease" err="Put \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 04:58:28.795559 sshd[6700]: Connection closed by invalid user 8.137.121.98 port 54792 [preauth]
Sep 9 04:58:28.799256 systemd[1]: sshd@21-128.140.114.243:22-8.137.121.98:54792.service: Deactivated successfully.
Sep 9 04:58:29.654458 systemd[1]: cri-containerd-01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2.scope: Deactivated successfully.
Sep 9 04:58:29.663620 containerd[1539]: time="2025-09-09T04:58:29.663571807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\" id:\"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\" pid:6679 exit_status:1 exited_at:{seconds:1757393909 nanos:663193041}"
Sep 9 04:58:29.664326 containerd[1539]: time="2025-09-09T04:58:29.664153617Z" level=info msg="received exit event container_id:\"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\" id:\"01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2\" pid:6679 exit_status:1 exited_at:{seconds:1757393909 nanos:663193041}"
Sep 9 04:58:29.689451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2-rootfs.mount: Deactivated successfully.
Sep 9 04:58:30.379144 kubelet[2752]: I0909 04:58:30.379091    2752 scope.go:117] "RemoveContainer" containerID="bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a"
Sep 9 04:58:30.380357 kubelet[2752]: I0909 04:58:30.380051    2752 scope.go:117] "RemoveContainer" containerID="01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2"
Sep 9 04:58:30.380357 kubelet[2752]: E0909 04:58:30.380251    2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-sjl9f_tigera-operator(2cebccc6-7f81-4211-bc76-13c25e966f00)\"" pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" podUID="2cebccc6-7f81-4211-bc76-13c25e966f00"
Sep 9 04:58:30.382178 containerd[1539]: time="2025-09-09T04:58:30.381972457Z" level=info msg="RemoveContainer for \"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\""
Sep 9 04:58:30.388597 containerd[1539]: time="2025-09-09T04:58:30.388422970Z" level=info msg="RemoveContainer for \"bbfc9d18927535080a9a5bda6816fd912fe9babe5dba61623d062434282db05a\" returns successfully"
Sep 9 04:58:33.035242 kubelet[2752]: E0909 04:58:33.035056    2752 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-4452-0-0-n-1f6e10e4b9.186384699082f99a  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4452-0-0-n-1f6e10e4b9,UID:687a54f3106bb66e0d5e6125b138a2bd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-n-1f6e10e4b9,},FirstTimestamp:2025-09-09 04:57:48.535560602 +0000 UTC m=+192.318990779,LastTimestamp:2025-09-09 04:57:52.54815169 +0000 UTC m=+196.331581867,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-n-1f6e10e4b9,}"
Sep 9 04:58:34.179918 kubelet[2752]: E0909 04:58:34.179801    2752 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Sep 9 04:58:34.180489 kubelet[2752]: E0909 04:58:34.179967    2752 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Sep 9 04:58:34.180489 kubelet[2752]: I0909 04:58:34.179995    2752 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Sep 9 04:58:37.160849 containerd[1539]: time="2025-09-09T04:58:37.160797682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37ecfa4689ad6db384ff648ab79f6ddc69fa571f593ecc56bc88f653f1062dbc\" id:\"314d94e7b1e03a826c580d2b4d55149e13d856bd499ba896c2633bdc10ae329f\" pid:6736 exited_at:{seconds:1757393917 nanos:160160909}"
Sep 9 04:58:42.320018 kubelet[2752]: I0909 04:58:42.319577    2752 scope.go:117] "RemoveContainer" containerID="01933967bb3968ef2c924cf4b01b511b41205d409ba986b30db2a365f8845eb2"
Sep 9 04:58:42.320682 kubelet[2752]: E0909 04:58:42.320187    2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-sjl9f_tigera-operator(2cebccc6-7f81-4211-bc76-13c25e966f00)\"" pod="tigera-operator/tigera-operator-58fc44c59b-sjl9f" podUID="2cebccc6-7f81-4211-bc76-13c25e966f00"
Sep 9 04:58:44.180888 kubelet[2752]: E0909 04:58:44.180584    2752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.114.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-n-1f6e10e4b9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Sep 9 04:58:44.829321 containerd[1539]: time="2025-09-09T04:58:44.829263595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71ae806fefe7f4e045808d89337025a07e7c8d652af0cb2459757ca86e8db567\" id:\"646c992ac0fa9d81f16317a7abd39facf82239150a296fbce9ffe40cb40d6141\" pid:6784 exit_status:1 exited_at:{seconds:1757393924 nanos:828912187}"
Sep 9 04:58:46.448822 containerd[1539]: time="2025-09-09T04:58:46.448706768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d15f2f2463be94c8f2fc92c7b98078e175b69a8fa1882227cadf4ef9343d6\" id:\"63c59f216935e145b525415170e357b52f462e010b54d374341786424c72bfa9\" pid:6805 exited_at:{seconds:1757393926 nanos:448404321}"