Sep 10 23:47:11.802529 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:47:11.802552 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:47:11.802563 kernel: KASLR enabled
Sep 10 23:47:11.802569 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 10 23:47:11.802574 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Sep 10 23:47:11.802580 kernel: random: crng init done
Sep 10 23:47:11.802587 kernel: secureboot: Secure boot disabled
Sep 10 23:47:11.802593 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:47:11.802599 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 10 23:47:11.802605 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:47:11.802613 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.802618 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.802624 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.802630 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.802637 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.804373 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.804382 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.804388 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.804395 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:47:11.804401 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 10 23:47:11.804407 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 10 23:47:11.804414 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:47:11.804420 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 10 23:47:11.804426 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Sep 10 23:47:11.804432 kernel: Zone ranges:
Sep 10 23:47:11.804441 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 10 23:47:11.804447 kernel: DMA32 empty
Sep 10 23:47:11.804453 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 10 23:47:11.804459 kernel: Device empty
Sep 10 23:47:11.804466 kernel: Movable zone start for each node
Sep 10 23:47:11.804472 kernel: Early memory node ranges
Sep 10 23:47:11.804478 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Sep 10 23:47:11.804484 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Sep 10 23:47:11.804490 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Sep 10 23:47:11.804497 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 10 23:47:11.804503 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 10 23:47:11.804509 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 10 23:47:11.804515 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 10 23:47:11.804523 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 10 23:47:11.804529 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 10 23:47:11.804538 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 10 23:47:11.804545 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 10 23:47:11.804552 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Sep 10 23:47:11.804560 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:47:11.804567 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:47:11.804573 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:47:11.804580 kernel: psci: Trusted OS migration not required
Sep 10 23:47:11.804586 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:47:11.804593 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:47:11.804599 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:47:11.804606 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:47:11.804613 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 10 23:47:11.804620 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:47:11.804626 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:47:11.804634 kernel: CPU features: detected: Spectre-v4
Sep 10 23:47:11.804682 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:47:11.804690 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:47:11.804697 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:47:11.804704 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:47:11.804711 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:47:11.804717 kernel: alternatives: applying boot alternatives
Sep 10 23:47:11.804725 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:47:11.804732 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:47:11.804739 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:47:11.804749 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:47:11.804755 kernel: Fallback order for Node 0: 0
Sep 10 23:47:11.804762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Sep 10 23:47:11.804768 kernel: Policy zone: Normal
Sep 10 23:47:11.804775 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:47:11.804782 kernel: software IO TLB: area num 2.
Sep 10 23:47:11.804789 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Sep 10 23:47:11.804795 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 10 23:47:11.804802 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:47:11.804810 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:47:11.804817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 10 23:47:11.804823 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:47:11.804832 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:47:11.804839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:47:11.804846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 10 23:47:11.804852 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 10 23:47:11.804859 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 10 23:47:11.804866 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:47:11.804873 kernel: GICv3: 256 SPIs implemented
Sep 10 23:47:11.804879 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:47:11.804886 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:47:11.804892 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:47:11.804899 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:47:11.804906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:47:11.804914 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:47:11.804921 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:47:11.804927 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:47:11.804934 kernel: GICv3: using LPI property table @0x0000000100120000
Sep 10 23:47:11.804941 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Sep 10 23:47:11.804947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:47:11.804954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:47:11.804961 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:47:11.804967 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:47:11.804987 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:47:11.804997 kernel: Console: colour dummy device 80x25
Sep 10 23:47:11.805006 kernel: ACPI: Core revision 20240827
Sep 10 23:47:11.805014 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:47:11.805021 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:47:11.805028 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:47:11.805034 kernel: landlock: Up and running.
Sep 10 23:47:11.805041 kernel: SELinux: Initializing.
Sep 10 23:47:11.805048 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:47:11.805055 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:47:11.805063 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:47:11.805071 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:47:11.805078 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:47:11.805085 kernel: Remapping and enabling EFI services.
Sep 10 23:47:11.805092 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:47:11.805099 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:47:11.805106 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:47:11.805113 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Sep 10 23:47:11.805120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:47:11.805127 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:47:11.805135 kernel: smp: Brought up 1 node, 2 CPUs
Sep 10 23:47:11.805147 kernel: SMP: Total of 2 processors activated.
Sep 10 23:47:11.805154 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:47:11.805162 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:47:11.805170 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:47:11.805177 kernel: CPU features: detected: Common not Private translations
Sep 10 23:47:11.805184 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:47:11.805192 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:47:11.805201 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:47:11.805209 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:47:11.805216 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:47:11.805224 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:47:11.805231 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:47:11.805239 kernel: alternatives: applying system-wide alternatives
Sep 10 23:47:11.805246 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Sep 10 23:47:11.805253 kernel: Memory: 3859556K/4096000K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 214964K reserved, 16384K cma-reserved)
Sep 10 23:47:11.805261 kernel: devtmpfs: initialized
Sep 10 23:47:11.805269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:47:11.805277 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 10 23:47:11.805285 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:47:11.805292 kernel: 0 pages in range for non-PLT usage
Sep 10 23:47:11.805299 kernel: 508560 pages in range for PLT usage
Sep 10 23:47:11.805306 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:47:11.805314 kernel: SMBIOS 3.0.0 present.
Sep 10 23:47:11.805321 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 10 23:47:11.805328 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:47:11.805337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:47:11.805345 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:47:11.805352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:47:11.805360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:47:11.805367 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:47:11.805374 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Sep 10 23:47:11.805382 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:47:11.805389 kernel: cpuidle: using governor menu
Sep 10 23:47:11.805396 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:47:11.805405 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:47:11.805412 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:47:11.805420 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:47:11.805427 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:47:11.805434 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:47:11.805441 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:47:11.805449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:47:11.805456 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:47:11.805463 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:47:11.805472 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:47:11.805479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:47:11.805486 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:47:11.805493 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:47:11.805501 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:47:11.805508 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:47:11.805515 kernel: ACPI: Interpreter enabled
Sep 10 23:47:11.805522 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:47:11.805529 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:47:11.805538 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:47:11.805545 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:47:11.805552 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:47:11.805559 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:47:11.805567 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:47:11.805766 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:47:11.805837 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:47:11.805895 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:47:11.805955 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:47:11.806029 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:47:11.806040 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:47:11.806048 kernel: PCI host bridge to bus 0000:00
Sep 10 23:47:11.806116 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:47:11.806169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:47:11.806221 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:47:11.806279 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:47:11.806358 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:47:11.806429 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Sep 10 23:47:11.806490 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Sep 10 23:47:11.806551 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 10 23:47:11.806618 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.806866 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Sep 10 23:47:11.806937 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 10 23:47:11.807059 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Sep 10 23:47:11.807141 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Sep 10 23:47:11.807216 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.807276 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Sep 10 23:47:11.807335 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 10 23:47:11.807402 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Sep 10 23:47:11.807474 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.807539 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Sep 10 23:47:11.807601 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 10 23:47:11.807684 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Sep 10 23:47:11.807771 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Sep 10 23:47:11.807840 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.807905 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Sep 10 23:47:11.807963 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 10 23:47:11.808039 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Sep 10 23:47:11.808101 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Sep 10 23:47:11.808169 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.808230 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Sep 10 23:47:11.808291 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 10 23:47:11.808355 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 10 23:47:11.808417 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Sep 10 23:47:11.808482 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.808542 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Sep 10 23:47:11.808600 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 10 23:47:11.808692 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Sep 10 23:47:11.808757 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Sep 10 23:47:11.808831 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.808893 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Sep 10 23:47:11.808952 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 10 23:47:11.809027 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Sep 10 23:47:11.809089 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Sep 10 23:47:11.809155 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.809215 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Sep 10 23:47:11.809278 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 10 23:47:11.809336 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Sep 10 23:47:11.809402 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 10 23:47:11.809462 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Sep 10 23:47:11.809521 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 10 23:47:11.809579 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Sep 10 23:47:11.809661 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Sep 10 23:47:11.809728 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Sep 10 23:47:11.809800 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 10 23:47:11.809863 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Sep 10 23:47:11.809925 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:47:11.810027 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Sep 10 23:47:11.810115 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 10 23:47:11.810184 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Sep 10 23:47:11.810256 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Sep 10 23:47:11.810318 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Sep 10 23:47:11.810379 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 10 23:47:11.810451 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Sep 10 23:47:11.810514 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 10 23:47:11.810582 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 10 23:47:11.810703 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 10 23:47:11.810793 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Sep 10 23:47:11.810857 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Sep 10 23:47:11.810918 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 10 23:47:11.811002 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 10 23:47:11.811072 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Sep 10 23:47:11.811140 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 10 23:47:11.811201 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Sep 10 23:47:11.811263 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 10 23:47:11.811322 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 10 23:47:11.811381 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 10 23:47:11.811443 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 10 23:47:11.811502 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 10 23:47:11.811567 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 10 23:47:11.811628 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 10 23:47:11.812017 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 10 23:47:11.812104 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 10 23:47:11.812169 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 10 23:47:11.812609 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 10 23:47:11.812752 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 10 23:47:11.812821 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 10 23:47:11.812888 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 10 23:47:11.812956 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 10 23:47:11.813041 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 10 23:47:11.813111 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 10 23:47:11.813175 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 10 23:47:11.813249 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 10 23:47:11.813310 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 10 23:47:11.813371 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 10 23:47:11.813434 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 10 23:47:11.813495 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 10 23:47:11.813554 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 10 23:47:11.813617 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 10 23:47:11.815792 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 10 23:47:11.815868 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 10 23:47:11.815933 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Sep 10 23:47:11.816048 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Sep 10 23:47:11.816118 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Sep 10 23:47:11.816180 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Sep 10 23:47:11.816243 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Sep 10 23:47:11.816312 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Sep 10 23:47:11.816400 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Sep 10 23:47:11.816465 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Sep 10 23:47:11.816527 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Sep 10 23:47:11.816587 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Sep 10 23:47:11.816685 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Sep 10 23:47:11.816756 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Sep 10 23:47:11.816820 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Sep 10 23:47:11.816884 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Sep 10 23:47:11.816947 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Sep 10 23:47:11.817027 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Sep 10 23:47:11.817093 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Sep 10 23:47:11.817164 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Sep 10 23:47:11.817236 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Sep 10 23:47:11.817297 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Sep 10 23:47:11.817360 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Sep 10 23:47:11.817422 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Sep 10 23:47:11.817482 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Sep 10 23:47:11.817542 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Sep 10 23:47:11.817603 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Sep 10 23:47:11.817731 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Sep 10 23:47:11.817797 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Sep 10 23:47:11.817857 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Sep 10 23:47:11.817917 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Sep 10 23:47:11.818032 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Sep 10 23:47:11.818109 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Sep 10 23:47:11.818170 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Sep 10 23:47:11.818230 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Sep 10 23:47:11.818304 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Sep 10 23:47:11.818366 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Sep 10 23:47:11.818425 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Sep 10 23:47:11.818485 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Sep 10 23:47:11.818549 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Sep 10 23:47:11.818614 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Sep 10 23:47:11.819465 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Sep 10 23:47:11.819547 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:47:11.819619 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Sep 10 23:47:11.819709 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 10 23:47:11.819773 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 10 23:47:11.819832 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 10 23:47:11.819890 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 10 23:47:11.819956 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Sep 10 23:47:11.820037 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 10 23:47:11.820104 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 10 23:47:11.820164 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 10 23:47:11.820223 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 10 23:47:11.820291 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Sep 10 23:47:11.820353 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Sep 10 23:47:11.820414 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 10 23:47:11.820473 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 10 23:47:11.820534 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 10 23:47:11.820593 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 10 23:47:11.820678 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Sep 10 23:47:11.820743 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 10 23:47:11.820806 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 10 23:47:11.820865 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 10 23:47:11.820952 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 10 23:47:11.821091 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Sep 10 23:47:11.821166 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 10 23:47:11.821228 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 10 23:47:11.821287 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 10 23:47:11.821347 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 10 23:47:11.821413 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Sep 10 23:47:11.821475 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Sep 10 23:47:11.821539 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 10 23:47:11.821602 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 10 23:47:11.823349 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 10 23:47:11.823428 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 10 23:47:11.823499 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Sep 10 23:47:11.823563 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Sep 10 23:47:11.823626 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Sep 10 23:47:11.823751 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 10 23:47:11.823818 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 10 23:47:11.823883 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 10 23:47:11.823944 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 10 23:47:11.824030 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 10 23:47:11.824096 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 10 23:47:11.824157 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 10 23:47:11.824217 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 10 23:47:11.824279 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 10 23:47:11.824339 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 10 23:47:11.824398 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 10 23:47:11.824461 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 10 23:47:11.824523 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:47:11.824576 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:47:11.824628 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:47:11.825504 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 10 23:47:11.825577 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 10 23:47:11.825633 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 10 23:47:11.825765 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 10 23:47:11.825825 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 10 23:47:11.825905 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 10 23:47:11.826031 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 10 23:47:11.826099 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 10 23:47:11.826155 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 10 23:47:11.826223 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 10 23:47:11.826279 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 10 23:47:11.826333 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 10 23:47:11.826398 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 10 23:47:11.826452 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 10 23:47:11.826507 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 10 23:47:11.826568 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 10 23:47:11.826626 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 10 23:47:11.826728 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 10 23:47:11.826795 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 
10 23:47:11.826856 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 10 23:47:11.826910 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 10 23:47:11.826986 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 10 23:47:11.827071 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 10 23:47:11.827137 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 10 23:47:11.827201 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 10 23:47:11.827256 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 10 23:47:11.827312 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 10 23:47:11.827322 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 10 23:47:11.827330 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 10 23:47:11.827338 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 10 23:47:11.827349 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 10 23:47:11.827357 kernel: iommu: Default domain type: Translated Sep 10 23:47:11.827365 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 10 23:47:11.827373 kernel: efivars: Registered efivars operations Sep 10 23:47:11.827381 kernel: vgaarb: loaded Sep 10 23:47:11.827389 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 10 23:47:11.827397 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 23:47:11.827405 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 23:47:11.827412 kernel: pnp: PnP ACPI init Sep 10 23:47:11.827756 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 10 23:47:11.827775 kernel: pnp: PnP ACPI: found 1 devices Sep 10 23:47:11.827783 kernel: NET: Registered PF_INET protocol family Sep 10 23:47:11.827791 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 23:47:11.827799 kernel: 
tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 23:47:11.827807 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 23:47:11.827815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 23:47:11.827822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 23:47:11.827835 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 23:47:11.827843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 23:47:11.827851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 23:47:11.827858 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 23:47:11.827933 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 10 23:47:11.827944 kernel: PCI: CLS 0 bytes, default 64 Sep 10 23:47:11.827952 kernel: kvm [1]: HYP mode not available Sep 10 23:47:11.827960 kernel: Initialise system trusted keyrings Sep 10 23:47:11.827967 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 23:47:11.828016 kernel: Key type asymmetric registered Sep 10 23:47:11.828025 kernel: Asymmetric key parser 'x509' registered Sep 10 23:47:11.828033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 10 23:47:11.828041 kernel: io scheduler mq-deadline registered Sep 10 23:47:11.828049 kernel: io scheduler kyber registered Sep 10 23:47:11.828056 kernel: io scheduler bfq registered Sep 10 23:47:11.828065 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 10 23:47:11.828147 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 10 23:47:11.828544 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 10 23:47:11.828630 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.828764 kernel: pcieport 0000:00:02.1: PME: Signaling 
with IRQ 51 Sep 10 23:47:11.828828 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 10 23:47:11.828889 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.828952 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 10 23:47:11.829036 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 10 23:47:11.829098 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.829160 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 10 23:47:11.829225 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 10 23:47:11.829284 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.829346 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 10 23:47:11.829406 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 10 23:47:11.829465 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.829528 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 10 23:47:11.829587 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 10 23:47:11.829675 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.829750 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 10 23:47:11.829812 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 10 23:47:11.829873 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.829937 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 10 
23:47:11.830015 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 10 23:47:11.830081 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.830092 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 10 23:47:11.830155 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 10 23:47:11.830216 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 10 23:47:11.830276 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 23:47:11.830286 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 10 23:47:11.830295 kernel: ACPI: button: Power Button [PWRB] Sep 10 23:47:11.830303 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 10 23:47:11.830370 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 10 23:47:11.830437 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 10 23:47:11.830448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 23:47:11.830458 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 10 23:47:11.830519 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 10 23:47:11.830529 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 10 23:47:11.830537 kernel: thunder_xcv, ver 1.0 Sep 10 23:47:11.830545 kernel: thunder_bgx, ver 1.0 Sep 10 23:47:11.830552 kernel: nicpf, ver 1.0 Sep 10 23:47:11.830560 kernel: nicvf, ver 1.0 Sep 10 23:47:11.830631 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 10 23:47:11.830725 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:47:11 UTC (1757548031) Sep 10 23:47:11.830736 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 10 23:47:11.830744 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 10 
23:47:11.830752 kernel: watchdog: NMI not fully supported Sep 10 23:47:11.830759 kernel: watchdog: Hard watchdog permanently disabled Sep 10 23:47:11.830767 kernel: NET: Registered PF_INET6 protocol family Sep 10 23:47:11.830775 kernel: Segment Routing with IPv6 Sep 10 23:47:11.830783 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 23:47:11.830790 kernel: NET: Registered PF_PACKET protocol family Sep 10 23:47:11.830801 kernel: Key type dns_resolver registered Sep 10 23:47:11.830808 kernel: registered taskstats version 1 Sep 10 23:47:11.830816 kernel: Loading compiled-in X.509 certificates Sep 10 23:47:11.830824 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7' Sep 10 23:47:11.830832 kernel: Demotion targets for Node 0: null Sep 10 23:47:11.830840 kernel: Key type .fscrypt registered Sep 10 23:47:11.830847 kernel: Key type fscrypt-provisioning registered Sep 10 23:47:11.830855 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 23:47:11.830864 kernel: ima: Allocated hash algorithm: sha1 Sep 10 23:47:11.830872 kernel: ima: No architecture policies found Sep 10 23:47:11.830879 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 10 23:47:11.830887 kernel: clk: Disabling unused clocks Sep 10 23:47:11.830895 kernel: PM: genpd: Disabling unused power domains Sep 10 23:47:11.831686 kernel: Warning: unable to open an initial console. Sep 10 23:47:11.831706 kernel: Freeing unused kernel memory: 38976K Sep 10 23:47:11.831715 kernel: Run /init as init process Sep 10 23:47:11.831723 kernel: with arguments: Sep 10 23:47:11.831735 kernel: /init Sep 10 23:47:11.831743 kernel: with environment: Sep 10 23:47:11.831750 kernel: HOME=/ Sep 10 23:47:11.831758 kernel: TERM=linux Sep 10 23:47:11.831766 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 23:47:11.831775 systemd[1]: Successfully made /usr/ read-only. 
Sep 10 23:47:11.831786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:47:11.831795 systemd[1]: Detected virtualization kvm.
Sep 10 23:47:11.831804 systemd[1]: Detected architecture arm64.
Sep 10 23:47:11.831812 systemd[1]: Running in initrd.
Sep 10 23:47:11.831820 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:47:11.831829 systemd[1]: Hostname set to .
Sep 10 23:47:11.831837 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:47:11.831845 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:47:11.831853 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:47:11.831862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:47:11.831871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:47:11.831880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:47:11.831888 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:47:11.831897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:47:11.831906 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:47:11.831915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:47:11.831925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:47:11.831934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:47:11.831942 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:47:11.831951 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:47:11.831959 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:47:11.831967 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:47:11.831997 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:47:11.832008 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:47:11.832016 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:47:11.832027 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:47:11.832035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:47:11.832044 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:47:11.832052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:47:11.832060 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:47:11.832068 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:47:11.832076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:47:11.832084 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:47:11.832093 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:47:11.832102 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:47:11.832110 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:47:11.832118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:47:11.832127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:47:11.832135 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:47:11.832144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:47:11.832153 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:47:11.832161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:47:11.832202 systemd-journald[244]: Collecting audit messages is disabled.
Sep 10 23:47:11.832226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:47:11.832234 kernel: Bridge firewalling registered
Sep 10 23:47:11.832242 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:47:11.832251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:47:11.832259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:47:11.832268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:47:11.832276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:47:11.832286 systemd-journald[244]: Journal started
Sep 10 23:47:11.832306 systemd-journald[244]: Runtime Journal (/run/log/journal/80dc27c55e9f4159929c315c2e35c461) is 8M, max 76.5M, 68.5M free.
Sep 10 23:47:11.785878 systemd-modules-load[246]: Inserted module 'overlay'
Sep 10 23:47:11.833768 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:47:11.807486 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 10 23:47:11.835943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:47:11.841538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:47:11.842525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:47:11.854841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:47:11.862481 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:47:11.864812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:47:11.866972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:47:11.869278 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:47:11.871153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:47:11.903835 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:47:11.915740 systemd-resolved[285]: Positive Trust Anchors:
Sep 10 23:47:11.915755 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:47:11.915789 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:47:11.923388 systemd-resolved[285]: Defaulting to hostname 'linux'.
Sep 10 23:47:11.924936 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:47:11.926147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:47:12.011697 kernel: SCSI subsystem initialized
Sep 10 23:47:12.016682 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:47:12.023676 kernel: iscsi: registered transport (tcp)
Sep 10 23:47:12.036707 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:47:12.036796 kernel: QLogic iSCSI HBA Driver
Sep 10 23:47:12.057461 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:47:12.082832 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:47:12.087929 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:47:12.130365 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:47:12.133274 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:47:12.203770 kernel: raid6: neonx8 gen() 15632 MB/s
Sep 10 23:47:12.220679 kernel: raid6: neonx4 gen() 15628 MB/s
Sep 10 23:47:12.237683 kernel: raid6: neonx2 gen() 13117 MB/s
Sep 10 23:47:12.254705 kernel: raid6: neonx1 gen() 10349 MB/s
Sep 10 23:47:12.271705 kernel: raid6: int64x8 gen() 6856 MB/s
Sep 10 23:47:12.288696 kernel: raid6: int64x4 gen() 7312 MB/s
Sep 10 23:47:12.305771 kernel: raid6: int64x2 gen() 6068 MB/s
Sep 10 23:47:12.322691 kernel: raid6: int64x1 gen() 5021 MB/s
Sep 10 23:47:12.322767 kernel: raid6: using algorithm neonx8 gen() 15632 MB/s
Sep 10 23:47:12.339725 kernel: raid6: .... xor() 11527 MB/s, rmw enabled
Sep 10 23:47:12.339798 kernel: raid6: using neon recovery algorithm
Sep 10 23:47:12.344741 kernel: xor: measuring software checksum speed
Sep 10 23:47:12.344817 kernel: 8regs : 21613 MB/sec
Sep 10 23:47:12.344829 kernel: 32regs : 18812 MB/sec
Sep 10 23:47:12.345726 kernel: arm64_neon : 26778 MB/sec
Sep 10 23:47:12.345764 kernel: xor: using function: arm64_neon (26778 MB/sec)
Sep 10 23:47:12.400690 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:47:12.409740 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:47:12.411931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:47:12.441838 systemd-udevd[493]: Using default interface naming scheme 'v255'.
Sep 10 23:47:12.446360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:47:12.451005 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:47:12.476845 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation
Sep 10 23:47:12.502730 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:47:12.505375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:47:12.567239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:47:12.572022 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:47:12.659673 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Sep 10 23:47:12.672055 kernel: scsi host0: Virtio SCSI HBA
Sep 10 23:47:12.683740 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 23:47:12.684953 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 10 23:47:12.689994 kernel: ACPI: bus type USB registered
Sep 10 23:47:12.690051 kernel: usbcore: registered new interface driver usbfs
Sep 10 23:47:12.691150 kernel: usbcore: registered new interface driver hub
Sep 10 23:47:12.692126 kernel: usbcore: registered new device driver usb
Sep 10 23:47:12.704845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:47:12.705794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:47:12.707670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:47:12.710980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:47:12.723043 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 10 23:47:12.723290 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 10 23:47:12.723411 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 10 23:47:12.723507 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 10 23:47:12.723583 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 10 23:47:12.736801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:47:12.736865 kernel: GPT:17805311 != 80003071
Sep 10 23:47:12.736878 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:47:12.736890 kernel: GPT:17805311 != 80003071
Sep 10 23:47:12.736911 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:47:12.737976 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 10 23:47:12.738009 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 10 23:47:12.742661 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 10 23:47:12.742846 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 10 23:47:12.744810 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 10 23:47:12.744009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:47:12.746629 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 10 23:47:12.748369 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 10 23:47:12.748466 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 10 23:47:12.748543 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 10 23:47:12.748618 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 23:47:12.749183 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 10 23:47:12.749961 kernel: hub 1-0:1.0: USB hub found
Sep 10 23:47:12.750668 kernel: hub 1-0:1.0: 4 ports detected
Sep 10 23:47:12.750797 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 10 23:47:12.752033 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 10 23:47:12.752939 kernel: hub 2-0:1.0: USB hub found
Sep 10 23:47:12.753109 kernel: hub 2-0:1.0: 4 ports detected
Sep 10 23:47:12.829202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 10 23:47:12.840110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 10 23:47:12.858304 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 10 23:47:12.860173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:47:12.871862 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 10 23:47:12.872527 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 10 23:47:12.878905 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:47:12.879539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:47:12.881278 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:47:12.883379 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:47:12.884703 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:47:12.908739 disk-uuid[599]: Primary Header is updated.
Sep 10 23:47:12.908739 disk-uuid[599]: Secondary Entries is updated.
Sep 10 23:47:12.908739 disk-uuid[599]: Secondary Header is updated.
Sep 10 23:47:12.918236 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:47:12.921678 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 10 23:47:12.933703 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 10 23:47:12.989678 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 10 23:47:13.129144 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 10 23:47:13.129209 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 10 23:47:13.129589 kernel: usbcore: registered new interface driver usbhid
Sep 10 23:47:13.129613 kernel: usbhid: USB HID core driver
Sep 10 23:47:13.234023 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 10 23:47:13.361688 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 10 23:47:13.414701 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 10 23:47:13.948678 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 10 23:47:13.950426 disk-uuid[602]: The operation has completed successfully.
Sep 10 23:47:14.008073 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:47:14.009480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:47:14.042352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:47:14.070146 sh[624]: Success
Sep 10 23:47:14.085803 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:47:14.085888 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:47:14.085916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:47:14.095666 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:47:14.156408 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:47:14.164732 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:47:14.171184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:47:14.190689 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (637)
Sep 10 23:47:14.191763 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade
Sep 10 23:47:14.191787 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:47:14.198659 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 10 23:47:14.198730 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:47:14.198748 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:47:14.199996 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:47:14.201016 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:47:14.201916 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:47:14.203007 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:47:14.205770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:47:14.233708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (670) Sep 10 23:47:14.235660 kernel: BTRFS info (device sda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:47:14.235704 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:47:14.240839 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 10 23:47:14.240908 kernel: BTRFS info (device sda6): turning on async discard Sep 10 23:47:14.240922 kernel: BTRFS info (device sda6): enabling free space tree Sep 10 23:47:14.245682 kernel: BTRFS info (device sda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:47:14.248632 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 23:47:14.250250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 23:47:14.339597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:47:14.344591 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:47:14.383007 systemd-networkd[811]: lo: Link UP Sep 10 23:47:14.383018 systemd-networkd[811]: lo: Gained carrier Sep 10 23:47:14.384583 systemd-networkd[811]: Enumeration completed Sep 10 23:47:14.384706 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:47:14.385323 systemd[1]: Reached target network.target - Network. Sep 10 23:47:14.386741 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:14.386745 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:47:14.387262 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 10 23:47:14.387265 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:47:14.394293 ignition[718]: Ignition 2.21.0 Sep 10 23:47:14.387520 systemd-networkd[811]: eth0: Link UP Sep 10 23:47:14.394300 ignition[718]: Stage: fetch-offline Sep 10 23:47:14.387706 systemd-networkd[811]: eth1: Link UP Sep 10 23:47:14.394332 ignition[718]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:14.388847 systemd-networkd[811]: eth0: Gained carrier Sep 10 23:47:14.394340 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:14.388859 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:14.394507 ignition[718]: parsed url from cmdline: "" Sep 10 23:47:14.397357 systemd-networkd[811]: eth1: Gained carrier Sep 10 23:47:14.394510 ignition[718]: no config URL provided Sep 10 23:47:14.397380 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:14.394514 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:47:14.400695 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:47:14.394520 ignition[718]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:47:14.394524 ignition[718]: failed to fetch config: resource requires networking Sep 10 23:47:14.396103 ignition[718]: Ignition finished successfully Sep 10 23:47:14.405840 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 10 23:47:14.435230 systemd-networkd[811]: eth0: DHCPv4 address 157.90.149.201/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 10 23:47:14.450034 ignition[816]: Ignition 2.21.0 Sep 10 23:47:14.450051 ignition[816]: Stage: fetch Sep 10 23:47:14.450197 ignition[816]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:14.450207 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:14.450287 ignition[816]: parsed url from cmdline: "" Sep 10 23:47:14.450290 ignition[816]: no config URL provided Sep 10 23:47:14.450294 ignition[816]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:47:14.450300 ignition[816]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:47:14.450401 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 10 23:47:14.459409 ignition[816]: GET result: OK Sep 10 23:47:14.460234 ignition[816]: parsing config with SHA512: c35915cc6e89190eddd877d51b7d9f2fc9182c89bb8496b9fa5afcda32652f5b0862d73be7f2a5420329cd96844fe074dfacaa33c699be1836d1403a0a0f0b3c Sep 10 23:47:14.467114 unknown[816]: fetched base config from "system" Sep 10 23:47:14.467258 unknown[816]: fetched base config from "system" Sep 10 23:47:14.467624 ignition[816]: fetch: fetch complete Sep 10 23:47:14.467264 unknown[816]: fetched user config from "hetzner" Sep 10 23:47:14.467629 ignition[816]: fetch: fetch passed Sep 10 23:47:14.467705 ignition[816]: Ignition finished successfully Sep 10 23:47:14.471923 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 10 23:47:14.475177 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 10 23:47:14.488845 systemd-networkd[811]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 10 23:47:14.505329 ignition[824]: Ignition 2.21.0 Sep 10 23:47:14.505351 ignition[824]: Stage: kargs Sep 10 23:47:14.505493 ignition[824]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:14.505502 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:14.506318 ignition[824]: kargs: kargs passed Sep 10 23:47:14.506370 ignition[824]: Ignition finished successfully Sep 10 23:47:14.510342 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 23:47:14.514790 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 10 23:47:14.543282 ignition[831]: Ignition 2.21.0 Sep 10 23:47:14.543302 ignition[831]: Stage: disks Sep 10 23:47:14.543449 ignition[831]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:14.543459 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:14.545497 ignition[831]: disks: disks passed Sep 10 23:47:14.545573 ignition[831]: Ignition finished successfully Sep 10 23:47:14.548204 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 23:47:14.549430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 23:47:14.550804 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 23:47:14.552107 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:47:14.553289 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:47:14.553824 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:47:14.556799 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 23:47:14.606430 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 10 23:47:14.611478 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Sep 10 23:47:14.613635 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 23:47:14.682932 kernel: EXT4-fs (sda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none. Sep 10 23:47:14.683712 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 23:47:14.684824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 23:47:14.687123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:47:14.690755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 23:47:14.697527 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 10 23:47:14.700538 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 23:47:14.700577 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:47:14.704139 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 23:47:14.707443 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 23:47:14.715172 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (849) Sep 10 23:47:14.715223 kernel: BTRFS info (device sda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:47:14.715237 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:47:14.725348 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 10 23:47:14.725420 kernel: BTRFS info (device sda6): turning on async discard Sep 10 23:47:14.725433 kernel: BTRFS info (device sda6): enabling free space tree Sep 10 23:47:14.730283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:47:14.780667 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 23:47:14.785892 coreos-metadata[851]: Sep 10 23:47:14.785 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 10 23:47:14.788479 coreos-metadata[851]: Sep 10 23:47:14.788 INFO Fetch successful Sep 10 23:47:14.789727 coreos-metadata[851]: Sep 10 23:47:14.789 INFO wrote hostname ci-4372-1-0-n-474f3036a8 to /sysroot/etc/hostname Sep 10 23:47:14.790863 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Sep 10 23:47:14.793715 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 10 23:47:14.799404 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 23:47:14.804030 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 23:47:14.904470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 23:47:14.906677 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 23:47:14.907885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 23:47:14.923662 kernel: BTRFS info (device sda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:47:14.941817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 23:47:14.952066 ignition[967]: INFO : Ignition 2.21.0 Sep 10 23:47:14.952066 ignition[967]: INFO : Stage: mount Sep 10 23:47:14.953319 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:14.953319 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:14.953319 ignition[967]: INFO : mount: mount passed Sep 10 23:47:14.953319 ignition[967]: INFO : Ignition finished successfully Sep 10 23:47:14.955214 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 23:47:14.956826 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Sep 10 23:47:15.192600 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 23:47:15.195541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:47:15.217719 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Sep 10 23:47:15.220030 kernel: BTRFS info (device sda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:47:15.220081 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:47:15.224035 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 10 23:47:15.224093 kernel: BTRFS info (device sda6): turning on async discard Sep 10 23:47:15.224104 kernel: BTRFS info (device sda6): enabling free space tree Sep 10 23:47:15.226715 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 23:47:15.259669 ignition[996]: INFO : Ignition 2.21.0 Sep 10 23:47:15.259669 ignition[996]: INFO : Stage: files Sep 10 23:47:15.259669 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:15.259669 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:15.261715 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Sep 10 23:47:15.264149 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 23:47:15.264149 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 23:47:15.267194 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 23:47:15.268577 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 23:47:15.270337 unknown[996]: wrote ssh authorized keys file for user: core Sep 10 23:47:15.271745 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 23:47:15.274082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 10 23:47:15.275227 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 10 23:47:15.453696 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 23:47:15.683865 systemd-networkd[811]: eth0: Gained IPv6LL Sep 10 23:47:15.849916 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:47:15.852577 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:47:15.863012 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:47:15.863012 ignition[996]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:47:15.863012 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:47:15.866604 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:47:15.866604 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:47:15.866604 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 10 23:47:16.127859 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 23:47:16.195823 systemd-networkd[811]: eth1: Gained IPv6LL Sep 10 23:47:16.349114 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:47:16.349114 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 23:47:16.353056 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:47:16.355687 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:47:16.355687 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 23:47:16.355687 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 23:47:16.361359 ignition[996]: INFO : files: 
op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 10 23:47:16.361359 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 10 23:47:16.361359 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 23:47:16.361359 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 10 23:47:16.361359 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 23:47:16.361359 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:47:16.361359 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:47:16.361359 ignition[996]: INFO : files: files passed Sep 10 23:47:16.361359 ignition[996]: INFO : Ignition finished successfully Sep 10 23:47:16.358794 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 23:47:16.363981 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 23:47:16.371496 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 23:47:16.375244 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 23:47:16.375338 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 10 23:47:16.396710 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:47:16.396710 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:47:16.398727 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:47:16.400725 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:47:16.402369 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 23:47:16.404143 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 23:47:16.462385 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 23:47:16.462568 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 23:47:16.464360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:47:16.465389 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:47:16.466393 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:47:16.467185 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:47:16.502314 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:47:16.506704 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:47:16.529881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:47:16.531280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:47:16.532561 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 23:47:16.533188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 10 23:47:16.533310 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:47:16.534736 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:47:16.535576 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:47:16.536700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:47:16.537915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:47:16.538929 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:47:16.540026 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:47:16.541151 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:47:16.542250 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:47:16.543351 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:47:16.544252 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:47:16.545264 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:47:16.546292 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:47:16.546417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:47:16.547639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:47:16.548298 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:47:16.549243 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 23:47:16.549324 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:47:16.550307 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:47:16.550421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 23:47:16.551932 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 10 23:47:16.552088 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:47:16.553058 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:47:16.553155 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:47:16.554303 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 10 23:47:16.554397 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 10 23:47:16.556300 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:47:16.559891 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:47:16.563173 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:47:16.563338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:47:16.565845 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:47:16.565963 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:47:16.571609 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:47:16.574189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 23:47:16.588621 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 23:47:16.594290 ignition[1050]: INFO : Ignition 2.21.0 Sep 10 23:47:16.594290 ignition[1050]: INFO : Stage: umount Sep 10 23:47:16.596681 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:47:16.596681 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 10 23:47:16.596681 ignition[1050]: INFO : umount: umount passed Sep 10 23:47:16.596681 ignition[1050]: INFO : Ignition finished successfully Sep 10 23:47:16.600188 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 23:47:16.601719 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 10 23:47:16.603211 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 23:47:16.603913 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 23:47:16.604576 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 23:47:16.604626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 23:47:16.611069 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 10 23:47:16.611149 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 10 23:47:16.612928 systemd[1]: Stopped target network.target - Network. Sep 10 23:47:16.613811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 23:47:16.613898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:47:16.616796 systemd[1]: Stopped target paths.target - Path Units. Sep 10 23:47:16.618768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 23:47:16.622912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:47:16.628764 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 23:47:16.630021 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 23:47:16.631166 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 23:47:16.631210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:47:16.632563 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 23:47:16.632604 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:47:16.634023 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 23:47:16.634087 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 23:47:16.634979 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 23:47:16.635024 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 10 23:47:16.636026 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 23:47:16.637130 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 23:47:16.638352 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 23:47:16.638433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 23:47:16.640290 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 23:47:16.640396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 23:47:16.645792 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 23:47:16.645908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 23:47:16.649210 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 23:47:16.649332 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 23:47:16.649370 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:47:16.651969 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:47:16.655834 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 23:47:16.657784 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 23:47:16.660361 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 23:47:16.660542 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 23:47:16.663733 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 23:47:16.663807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:47:16.666231 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 23:47:16.667850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 10 23:47:16.667921 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:47:16.669830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:47:16.669891 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:47:16.672874 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 23:47:16.672917 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 23:47:16.673914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:47:16.678181 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 23:47:16.695343 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 23:47:16.696450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:47:16.698900 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 23:47:16.699065 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 23:47:16.700584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 23:47:16.701210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 23:47:16.704492 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 23:47:16.704528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:47:16.705902 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 23:47:16.705984 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:47:16.708112 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 23:47:16.708160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 23:47:16.710231 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 10 23:47:16.710294 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 23:47:16.713270 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 23:47:16.713906 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 23:47:16.713997 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:47:16.716162 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 23:47:16.716206 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:47:16.719068 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 23:47:16.719111 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 23:47:16.719846 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 23:47:16.719882 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:47:16.720595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:47:16.720636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:47:16.729596 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 23:47:16.729727 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 23:47:16.730539 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 23:47:16.732725 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 23:47:16.764560 systemd[1]: Switching root. Sep 10 23:47:16.812412 systemd-journald[244]: Journal stopped Sep 10 23:47:17.747986 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 10 23:47:17.748066 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:47:17.748082 kernel: SELinux: policy capability open_perms=1
Sep 10 23:47:17.748094 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:47:17.748102 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:47:17.748111 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:47:17.748120 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:47:17.748133 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:47:17.748142 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:47:17.748150 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:47:17.748162 kernel: audit: type=1403 audit(1757548036.947:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:47:17.748176 systemd[1]: Successfully loaded SELinux policy in 56.168ms.
Sep 10 23:47:17.748191 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.833ms.
Sep 10 23:47:17.748201 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:47:17.748212 systemd[1]: Detected virtualization kvm.
Sep 10 23:47:17.748222 systemd[1]: Detected architecture arm64.
Sep 10 23:47:17.748232 systemd[1]: Detected first boot.
Sep 10 23:47:17.748242 systemd[1]: Hostname set to .
Sep 10 23:47:17.748251 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:47:17.748261 zram_generator::config[1094]: No configuration found.
Sep 10 23:47:17.748272 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:47:17.748283 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:47:17.748294 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:47:17.748303 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:47:17.748313 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:47:17.748326 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:47:17.748337 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:47:17.748347 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:47:17.748356 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:47:17.748366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:47:17.748376 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:47:17.748385 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:47:17.748396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:47:17.748406 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:47:17.748416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:47:17.748426 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:47:17.748435 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:47:17.748446 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:47:17.748456 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:47:17.748466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:47:17.748476 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:47:17.748487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:47:17.748497 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:47:17.748507 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:47:17.748517 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:47:17.748527 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:47:17.748537 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:47:17.748546 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:47:17.748560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:47:17.748571 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:47:17.748581 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:47:17.748591 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:47:17.748601 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:47:17.748611 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:47:17.748620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:47:17.748630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:47:17.748656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:47:17.748669 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:47:17.748681 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:47:17.748692 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:47:17.748702 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:47:17.748712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:47:17.748722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:47:17.748732 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:47:17.748742 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:47:17.748752 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:47:17.748763 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:47:17.748773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:47:17.748783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:47:17.748793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:47:17.748803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:47:17.748813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:47:17.748823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:47:17.748836 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:47:17.748846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:47:17.748857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:47:17.748867 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:47:17.748877 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:47:17.748887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:47:17.748897 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:47:17.748909 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:47:17.748919 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:47:17.748942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:47:17.748954 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:47:17.748965 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:47:17.748974 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:47:17.748984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:47:17.748995 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:47:17.749005 systemd[1]: Stopped verity-setup.service.
Sep 10 23:47:17.749017 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:47:17.749027 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:47:17.749036 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:47:17.749046 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:47:17.749057 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:47:17.749067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:47:17.749077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:47:17.749088 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:47:17.749099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:47:17.749110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:47:17.749120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:47:17.749130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:47:17.749141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:47:17.749151 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:47:17.749161 kernel: ACPI: bus type drm_connector registered
Sep 10 23:47:17.749170 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:47:17.749181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:47:17.749222 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 10 23:47:17.749246 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:47:17.749256 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:47:17.749269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:47:17.749280 systemd-journald[1158]: Journal started
Sep 10 23:47:17.749305 systemd-journald[1158]: Runtime Journal (/run/log/journal/80dc27c55e9f4159929c315c2e35c461) is 8M, max 76.5M, 68.5M free.
Sep 10 23:47:17.479863 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:47:17.496007 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 10 23:47:17.753906 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:47:17.496483 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:47:17.752612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:47:17.754136 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:47:17.757662 kernel: fuse: init (API version 7.41)
Sep 10 23:47:17.757297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:47:17.763673 kernel: loop: module loaded
Sep 10 23:47:17.766834 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:47:17.767863 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:47:17.770337 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:47:17.770522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:47:17.771534 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:47:17.779432 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:47:17.783275 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:47:17.785793 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:47:17.785833 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:47:17.789367 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:47:17.795988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:47:17.797848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:47:17.801913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:47:17.806885 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:47:17.808176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:47:17.812861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:47:17.813516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:47:17.817833 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:47:17.820201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:47:17.827390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:47:17.838185 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 10 23:47:17.839063 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Sep 10 23:47:17.850419 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:47:17.857409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:47:17.859791 systemd-journald[1158]: Time spent on flushing to /var/log/journal/80dc27c55e9f4159929c315c2e35c461 is 66.820ms for 1174 entries.
Sep 10 23:47:17.859791 systemd-journald[1158]: System Journal (/var/log/journal/80dc27c55e9f4159929c315c2e35c461) is 8M, max 584.8M, 576.8M free.
Sep 10 23:47:17.942055 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 10 23:47:17.942119 kernel: loop0: detected capacity change from 0 to 107312
Sep 10 23:47:17.942141 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:47:17.942161 kernel: loop1: detected capacity change from 0 to 211168
Sep 10 23:47:17.859224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:47:17.862413 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:47:17.867225 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:47:17.873877 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:47:17.930182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:47:17.945188 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:47:17.947590 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:47:17.964765 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:47:17.969081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:47:17.976025 kernel: loop2: detected capacity change from 0 to 8
Sep 10 23:47:17.994763 kernel: loop3: detected capacity change from 0 to 138376
Sep 10 23:47:18.012343 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Sep 10 23:47:18.013460 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Sep 10 23:47:18.035845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:47:18.050745 kernel: loop4: detected capacity change from 0 to 107312
Sep 10 23:47:18.064745 kernel: loop5: detected capacity change from 0 to 211168
Sep 10 23:47:18.098275 kernel: loop6: detected capacity change from 0 to 8
Sep 10 23:47:18.100807 kernel: loop7: detected capacity change from 0 to 138376
Sep 10 23:47:18.119548 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 10 23:47:18.120288 (sd-merge)[1239]: Merged extensions into '/usr'.
Sep 10 23:47:18.127845 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:47:18.127865 systemd[1]: Reloading...
Sep 10 23:47:18.240678 zram_generator::config[1266]: No configuration found.
Sep 10 23:47:18.340236 ldconfig[1209]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:47:18.392313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:47:18.468855 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:47:18.469181 systemd[1]: Reloading finished in 340 ms.
Sep 10 23:47:18.481378 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:47:18.484164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:47:18.493814 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:47:18.504441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:47:18.516739 systemd[1]: Reload requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:47:18.516753 systemd[1]: Reloading...
Sep 10 23:47:18.548028 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:47:18.548366 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:47:18.550467 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:47:18.550843 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:47:18.551608 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:47:18.551968 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Sep 10 23:47:18.552152 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Sep 10 23:47:18.561132 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:47:18.562194 systemd-tmpfiles[1305]: Skipping /boot
Sep 10 23:47:18.575298 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:47:18.575317 systemd-tmpfiles[1305]: Skipping /boot
Sep 10 23:47:18.597683 zram_generator::config[1331]: No configuration found.
Sep 10 23:47:18.681354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:47:18.756234 systemd[1]: Reloading finished in 239 ms.
Sep 10 23:47:18.780503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:47:18.786209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:47:18.792803 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:47:18.795538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:47:18.798016 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:47:18.802691 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:47:18.811975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:47:18.816305 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:47:18.823165 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:47:18.825736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:47:18.829079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:47:18.832032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:47:18.839975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:47:18.841904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:47:18.842094 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:47:18.846612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:47:18.846869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:47:18.847001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:47:18.850493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:47:18.856989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:47:18.860079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:47:18.860256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:47:18.866686 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:47:18.872003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:47:18.872630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:47:18.874901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:47:18.880945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:47:18.883312 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:47:18.891057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:47:18.895233 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:47:18.900532 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:47:18.903097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:47:18.912106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:47:18.913604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:47:18.915586 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:47:18.916236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:47:18.920092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:47:18.929507 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:47:18.931804 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:47:18.934023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:47:18.943138 systemd-udevd[1374]: Using default interface naming scheme 'v255'.
Sep 10 23:47:18.957389 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:47:18.961847 augenrules[1415]: No rules
Sep 10 23:47:18.964079 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:47:18.964357 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:47:18.988334 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:47:18.993358 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:47:19.068322 systemd-resolved[1373]: Positive Trust Anchors:
Sep 10 23:47:19.068344 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:47:19.068376 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:47:19.074459 systemd-resolved[1373]: Using system hostname 'ci-4372-1-0-n-474f3036a8'.
Sep 10 23:47:19.076831 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:47:19.077514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:47:19.086753 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 23:47:19.087905 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:47:19.089401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 23:47:19.091031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 23:47:19.091663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 23:47:19.092510 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 23:47:19.092541 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:47:19.093300 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 23:47:19.094213 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 23:47:19.095540 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 23:47:19.096291 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:47:19.098546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 23:47:19.102203 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 23:47:19.105381 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 10 23:47:19.106841 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 10 23:47:19.107513 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 10 23:47:19.112779 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 23:47:19.114692 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 10 23:47:19.117331 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 23:47:19.119033 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:47:19.120019 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:47:19.120807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 23:47:19.120838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 23:47:19.123314 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 10 23:47:19.127201 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 10 23:47:19.130061 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 23:47:19.133891 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 10 23:47:19.136825 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 10 23:47:19.137404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 10 23:47:19.139466 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 10 23:47:19.162095 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 10 23:47:19.166885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 10 23:47:19.170966 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 10 23:47:19.176660 jq[1460]: false
Sep 10 23:47:19.175140 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 10 23:47:19.177478 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 10 23:47:19.178361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 10 23:47:19.182890 systemd[1]: Starting update-engine.service - Update Engine...
Sep 10 23:47:19.188894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 10 23:47:19.191985 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 23:47:19.192984 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 10 23:47:19.193197 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 10 23:47:19.216339 jq[1473]: true
Sep 10 23:47:19.234337 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 10 23:47:19.234564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 10 23:47:19.280591 jq[1484]: true
Sep 10 23:47:19.287661 coreos-metadata[1457]: Sep 10 23:47:19.286 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 10 23:47:19.287661 coreos-metadata[1457]: Sep 10 23:47:19.286 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata)
Sep 10 23:47:19.298183 update_engine[1472]: I20250910 23:47:19.297991 1472 main.cc:92] Flatcar Update Engine starting
Sep 10 23:47:19.300180 dbus-daemon[1458]: [system] SELinux support is enabled
Sep 10 23:47:19.304066 update_engine[1472]: I20250910 23:47:19.303207 1472 update_check_scheduler.cc:74] Next update check in 10m17s
Sep 10 23:47:19.305747 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 10 23:47:19.310247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 10 23:47:19.310284 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 10 23:47:19.312882 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:47:19.312941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:47:19.315693 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:47:19.318813 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:47:19.322236 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 23:47:19.337174 extend-filesystems[1461]: Found /dev/sda6 Sep 10 23:47:19.338756 tar[1482]: linux-arm64/LICENSE Sep 10 23:47:19.338756 tar[1482]: linux-arm64/helm Sep 10 23:47:19.346230 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:47:19.347548 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:47:19.352906 extend-filesystems[1461]: Found /dev/sda9 Sep 10 23:47:19.360380 extend-filesystems[1461]: Checking size of /dev/sda9 Sep 10 23:47:19.394037 extend-filesystems[1461]: Resized partition /dev/sda9 Sep 10 23:47:19.402235 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:47:19.406234 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:47:19.406770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:47:19.410799 systemd[1]: Starting sshkeys.service... Sep 10 23:47:19.433664 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 10 23:47:19.470555 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 10 23:47:19.474389 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 10 23:47:19.542667 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 10 23:47:19.558394 systemd-networkd[1426]: lo: Link UP Sep 10 23:47:19.558405 systemd-networkd[1426]: lo: Gained carrier Sep 10 23:47:19.560598 systemd-networkd[1426]: Enumeration completed Sep 10 23:47:19.561366 extend-filesystems[1521]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 10 23:47:19.561366 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 10 23:47:19.561366 extend-filesystems[1521]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 10 23:47:19.560724 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:47:19.569103 extend-filesystems[1461]: Resized filesystem in /dev/sda9 Sep 10 23:47:19.562340 systemd[1]: Reached target network.target - Network. Sep 10 23:47:19.569026 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:47:19.571424 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:47:19.577321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:47:19.578456 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:47:19.580688 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:47:19.603084 coreos-metadata[1527]: Sep 10 23:47:19.603 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 10 23:47:19.603662 coreos-metadata[1527]: Sep 10 23:47:19.603 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata/public-keys) Sep 10 23:47:19.648707 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 10 23:47:19.670453 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:47:19.799207 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:19.799218 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:47:19.801111 systemd-networkd[1426]: eth0: Link UP Sep 10 23:47:19.801244 systemd-networkd[1426]: eth0: Gained carrier Sep 10 23:47:19.801267 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:19.829777 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:47:19.848552 systemd-networkd[1426]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:19.848560 systemd-networkd[1426]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:47:19.851504 systemd-networkd[1426]: eth1: Link UP Sep 10 23:47:19.854191 systemd-networkd[1426]: eth1: Gained carrier Sep 10 23:47:19.854216 systemd-networkd[1426]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:47:19.877069 systemd-networkd[1426]: eth0: DHCPv4 address 157.90.149.201/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 10 23:47:19.877926 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Sep 10 23:47:19.898707 systemd-networkd[1426]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 10 23:47:19.901087 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Sep 10 23:47:19.957235 systemd-logind[1466]: New seat seat0. 
Sep 10 23:47:19.961278 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:47:20.056553 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:47:20.069382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 10 23:47:20.077475 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 23:47:20.089676 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 23:47:20.102583 containerd[1544]: time="2025-09-10T23:47:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:47:20.106668 containerd[1544]: time="2025-09-10T23:47:20.106373200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.129859760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.88µs" Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.129902720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130012480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130184200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130200920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130224480Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130279680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130289800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130509080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130523720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130534280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:47:20.131670 containerd[1544]: time="2025-09-10T23:47:20.130542600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.130612360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.130887520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.130936880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: 
no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.130950080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.130977360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.131198000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:47:20.132077 containerd[1544]: time="2025-09-10T23:47:20.131258400Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137306520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137377280Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137400600Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137418800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137437800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137452360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137468080Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137484200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137500280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137511720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137525040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:47:20.137556 containerd[1544]: time="2025-09-10T23:47:20.137554240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137724200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137752680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137774000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137786240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137800440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137814600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137829760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137841040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:47:20.137852 containerd[1544]: time="2025-09-10T23:47:20.137857040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:47:20.138016 containerd[1544]: time="2025-09-10T23:47:20.137871840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:47:20.138016 containerd[1544]: time="2025-09-10T23:47:20.137886000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:47:20.139299 containerd[1544]: time="2025-09-10T23:47:20.138106280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:47:20.139299 containerd[1544]: time="2025-09-10T23:47:20.138132160Z" level=info msg="Start snapshots syncer" Sep 10 23:47:20.139299 containerd[1544]: time="2025-09-10T23:47:20.138163160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:47:20.139400 containerd[1544]: time="2025-09-10T23:47:20.138494920Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:47:20.139400 containerd[1544]: time="2025-09-10T23:47:20.138550040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143306560Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143511160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143540720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143554680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143567080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143582760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143594840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143606640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143666160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143683000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143694440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143754520Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143772360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143782480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143791200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143799680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143811240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143823200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143902640Z" level=info msg="runtime interface created" Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143921720Z" level=info msg="created NRI interface" Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143931680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143950720Z" level=info msg="Connect containerd service" Sep 10 23:47:20.145663 containerd[1544]: time="2025-09-10T23:47:20.143982200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:47:20.145663 
containerd[1544]: time="2025-09-10T23:47:20.144682880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:47:20.152788 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:47:20.155069 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:47:20.160735 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:47:20.166160 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 10 23:47:20.170356 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 10 23:47:20.200663 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 10 23:47:20.202109 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 10 23:47:20.202155 kernel: [drm] features: -context_init Sep 10 23:47:20.210134 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:47:20.210368 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:47:20.214121 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:47:20.218871 kernel: [drm] number of scanouts: 1 Sep 10 23:47:20.218983 kernel: [drm] number of cap sets: 0 Sep 10 23:47:20.219016 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Sep 10 23:47:20.224721 kernel: Console: switching to colour frame buffer device 160x50 Sep 10 23:47:20.239683 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 10 23:47:20.268523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:47:20.276046 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 10 23:47:20.281420 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:47:20.284030 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:47:20.288052 coreos-metadata[1457]: Sep 10 23:47:20.287 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Sep 10 23:47:20.292114 coreos-metadata[1457]: Sep 10 23:47:20.291 INFO Fetch successful Sep 10 23:47:20.292296 coreos-metadata[1457]: Sep 10 23:47:20.292 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 10 23:47:20.292788 coreos-metadata[1457]: Sep 10 23:47:20.292 INFO Fetch successful Sep 10 23:47:20.308805 tar[1482]: linux-arm64/README.md Sep 10 23:47:20.352170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:47:20.381829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:47:20.390418 containerd[1544]: time="2025-09-10T23:47:20.390287720Z" level=info msg="Start subscribing containerd event" Sep 10 23:47:20.390418 containerd[1544]: time="2025-09-10T23:47:20.390382520Z" level=info msg="Start recovering state" Sep 10 23:47:20.390540 containerd[1544]: time="2025-09-10T23:47:20.390478760Z" level=info msg="Start event monitor" Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390661720Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390688800Z" level=info msg="Start streaming server" Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390701480Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390709160Z" level=info msg="runtime interface starting up..." Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390715120Z" level=info msg="starting plugins..." 
Sep 10 23:47:20.392407 containerd[1544]: time="2025-09-10T23:47:20.390745080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:47:20.393670 containerd[1544]: time="2025-09-10T23:47:20.392723480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:47:20.393670 containerd[1544]: time="2025-09-10T23:47:20.392807280Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:47:20.393670 containerd[1544]: time="2025-09-10T23:47:20.392864360Z" level=info msg="containerd successfully booted in 0.291101s" Sep 10 23:47:20.394075 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 23:47:20.439442 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 10 23:47:20.440896 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:47:20.455405 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:47:20.469287 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 10 23:47:20.509565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:47:20.509832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:47:20.512999 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:47:20.516809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:47:20.562790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 23:47:20.604479 coreos-metadata[1527]: Sep 10 23:47:20.604 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #2 Sep 10 23:47:20.605773 coreos-metadata[1527]: Sep 10 23:47:20.605 INFO Fetch successful Sep 10 23:47:20.607972 unknown[1527]: wrote ssh authorized keys file for user: core Sep 10 23:47:20.632423 update-ssh-keys[1634]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:47:20.633967 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 10 23:47:20.638791 systemd[1]: Finished sshkeys.service. Sep 10 23:47:21.571926 systemd-networkd[1426]: eth1: Gained IPv6LL Sep 10 23:47:21.572762 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Sep 10 23:47:21.576136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:47:21.578311 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:47:21.581689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:47:21.585986 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:47:21.634457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:47:21.636309 systemd-networkd[1426]: eth0: Gained IPv6LL Sep 10 23:47:21.636857 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Sep 10 23:47:22.389605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:47:22.390729 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:47:22.399351 systemd[1]: Startup finished in 2.303s (kernel) + 5.335s (initrd) + 5.507s (userspace) = 13.146s. 
Sep 10 23:47:22.406362 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:47:22.897876 kubelet[1654]: E0910 23:47:22.897806 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:47:22.900706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:47:22.900866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:47:22.901385 systemd[1]: kubelet.service: Consumed 886ms CPU time, 256.6M memory peak. Sep 10 23:47:33.151936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 23:47:33.154342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:47:33.319729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:47:33.329309 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:47:33.380868 kubelet[1673]: E0910 23:47:33.380822 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:47:33.384301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:47:33.384462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:47:33.385159 systemd[1]: kubelet.service: Consumed 181ms CPU time, 105.9M memory peak. 
Sep 10 23:47:43.574269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 23:47:43.578079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:47:43.753884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:47:43.761056 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:47:43.804299 kubelet[1687]: E0910 23:47:43.804207 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:47:43.807543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:47:43.807818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:47:43.809783 systemd[1]: kubelet.service: Consumed 161ms CPU time, 106.5M memory peak. Sep 10 23:47:51.980970 systemd-timesyncd[1389]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). Sep 10 23:47:51.981061 systemd-timesyncd[1389]: Initial clock synchronization to Wed 2025-09-10 23:47:52.121437 UTC. Sep 10 23:47:53.824784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 23:47:53.827600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:47:53.996829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:47:54.007008 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:47:54.040180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 10 23:47:54.044944 systemd[1]: Started sshd@0-157.90.149.201:22-139.178.89.65:41334.service - OpenSSH per-connection server daemon (139.178.89.65:41334).
Sep 10 23:47:54.063608 kubelet[1702]: E0910 23:47:54.063546 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:47:54.067975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:47:54.068153 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:47:54.069773 systemd[1]: kubelet.service: Consumed 173ms CPU time, 105.4M memory peak.
Sep 10 23:47:55.067000 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 41334 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:47:55.070182 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:47:55.078082 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 10 23:47:55.079334 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 10 23:47:55.089703 systemd-logind[1466]: New session 1 of user core.
Sep 10 23:47:55.103331 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 10 23:47:55.108266 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 10 23:47:55.125312 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 10 23:47:55.128425 systemd-logind[1466]: New session c1 of user core.
Sep 10 23:47:55.294062 systemd[1714]: Queued start job for default target default.target.
Sep 10 23:47:55.304897 systemd[1714]: Created slice app.slice - User Application Slice.
Sep 10 23:47:55.304961 systemd[1714]: Reached target paths.target - Paths.
Sep 10 23:47:55.305162 systemd[1714]: Reached target timers.target - Timers.
Sep 10 23:47:55.307563 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 10 23:47:55.341192 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 10 23:47:55.341464 systemd[1714]: Reached target sockets.target - Sockets.
Sep 10 23:47:55.341548 systemd[1714]: Reached target basic.target - Basic System.
Sep 10 23:47:55.341613 systemd[1714]: Reached target default.target - Main User Target.
Sep 10 23:47:55.341804 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 10 23:47:55.342119 systemd[1714]: Startup finished in 206ms.
Sep 10 23:47:55.354945 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 10 23:47:56.051801 systemd[1]: Started sshd@1-157.90.149.201:22-139.178.89.65:41340.service - OpenSSH per-connection server daemon (139.178.89.65:41340).
Sep 10 23:47:57.050385 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 41340 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:47:57.052958 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:47:57.058709 systemd-logind[1466]: New session 2 of user core.
Sep 10 23:47:57.074004 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:47:57.734617 sshd[1727]: Connection closed by 139.178.89.65 port 41340
Sep 10 23:47:57.733554 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Sep 10 23:47:57.738291 systemd[1]: sshd@1-157.90.149.201:22-139.178.89.65:41340.service: Deactivated successfully.
Sep 10 23:47:57.740364 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:47:57.742537 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:47:57.744022 systemd-logind[1466]: Removed session 2.
Sep 10 23:47:57.920256 systemd[1]: Started sshd@2-157.90.149.201:22-139.178.89.65:41356.service - OpenSSH per-connection server daemon (139.178.89.65:41356).
Sep 10 23:47:58.988354 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 41356 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:47:58.990396 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:47:58.994920 systemd-logind[1466]: New session 3 of user core.
Sep 10 23:47:59.007023 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:47:59.710497 sshd[1735]: Connection closed by 139.178.89.65 port 41356
Sep 10 23:47:59.711455 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Sep 10 23:47:59.717502 systemd[1]: sshd@2-157.90.149.201:22-139.178.89.65:41356.service: Deactivated successfully.
Sep 10 23:47:59.717701 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:47:59.721234 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:47:59.723811 systemd-logind[1466]: Removed session 3.
Sep 10 23:47:59.886921 systemd[1]: Started sshd@3-157.90.149.201:22-139.178.89.65:41364.service - OpenSSH per-connection server daemon (139.178.89.65:41364).
Sep 10 23:48:00.910661 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 41364 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:48:00.912248 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:00.918434 systemd-logind[1466]: New session 4 of user core.
Sep 10 23:48:00.927408 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:48:01.602708 sshd[1743]: Connection closed by 139.178.89.65 port 41364
Sep 10 23:48:01.603286 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Sep 10 23:48:01.607864 systemd[1]: sshd@3-157.90.149.201:22-139.178.89.65:41364.service: Deactivated successfully.
Sep 10 23:48:01.610460 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:48:01.614491 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:48:01.616748 systemd-logind[1466]: Removed session 4.
Sep 10 23:48:01.775977 systemd[1]: Started sshd@4-157.90.149.201:22-139.178.89.65:34556.service - OpenSSH per-connection server daemon (139.178.89.65:34556).
Sep 10 23:48:02.781948 sshd[1749]: Accepted publickey for core from 139.178.89.65 port 34556 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:48:02.783468 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:02.788550 systemd-logind[1466]: New session 5 of user core.
Sep 10 23:48:02.796936 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:48:03.313252 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:48:03.313522 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:48:03.325580 sudo[1752]: pam_unix(sudo:session): session closed for user root
Sep 10 23:48:03.485664 sshd[1751]: Connection closed by 139.178.89.65 port 34556
Sep 10 23:48:03.486615 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Sep 10 23:48:03.492190 systemd[1]: sshd@4-157.90.149.201:22-139.178.89.65:34556.service: Deactivated successfully.
Sep 10 23:48:03.494623 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:48:03.496527 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:48:03.498406 systemd-logind[1466]: Removed session 5.
Sep 10 23:48:03.658987 systemd[1]: Started sshd@5-157.90.149.201:22-139.178.89.65:34564.service - OpenSSH per-connection server daemon (139.178.89.65:34564).
Sep 10 23:48:04.073803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 10 23:48:04.077045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:04.228696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:04.239248 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:48:04.252814 update_engine[1472]: I20250910 23:48:04.252732 1472 update_attempter.cc:509] Updating boot flags...
Sep 10 23:48:04.304564 kubelet[1768]: E0910 23:48:04.304512 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:48:04.309885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:48:04.310019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:48:04.310360 systemd[1]: kubelet.service: Consumed 159ms CPU time, 106.9M memory peak.
Sep 10 23:48:04.657888 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 34564 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:48:04.659697 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:04.666078 systemd-logind[1466]: New session 6 of user core.
Sep 10 23:48:04.676978 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:48:05.177364 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:48:05.177681 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:48:05.183359 sudo[1797]: pam_unix(sudo:session): session closed for user root
Sep 10 23:48:05.188951 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:48:05.189233 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:48:05.200982 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:48:05.251959 augenrules[1819]: No rules
Sep 10 23:48:05.253043 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:48:05.253372 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:48:05.254887 sudo[1796]: pam_unix(sudo:session): session closed for user root
Sep 10 23:48:05.413248 sshd[1795]: Connection closed by 139.178.89.65 port 34564
Sep 10 23:48:05.413122 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Sep 10 23:48:05.418590 systemd[1]: sshd@5-157.90.149.201:22-139.178.89.65:34564.service: Deactivated successfully.
Sep 10 23:48:05.420860 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:48:05.422613 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:48:05.424953 systemd-logind[1466]: Removed session 6.
Sep 10 23:48:05.587107 systemd[1]: Started sshd@6-157.90.149.201:22-139.178.89.65:34578.service - OpenSSH per-connection server daemon (139.178.89.65:34578).
Sep 10 23:48:06.614330 sshd[1828]: Accepted publickey for core from 139.178.89.65 port 34578 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:48:06.616505 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:06.622267 systemd-logind[1466]: New session 7 of user core.
Sep 10 23:48:06.633302 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:48:07.143555 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:48:07.144206 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:48:07.496882 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:48:07.508317 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:48:07.759715 dockerd[1849]: time="2025-09-10T23:48:07.758243287Z" level=info msg="Starting up"
Sep 10 23:48:07.764914 dockerd[1849]: time="2025-09-10T23:48:07.764187476Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:48:07.796172 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3215528282-merged.mount: Deactivated successfully.
Sep 10 23:48:07.823682 dockerd[1849]: time="2025-09-10T23:48:07.823030551Z" level=info msg="Loading containers: start."
Sep 10 23:48:07.835969 kernel: Initializing XFRM netlink socket
Sep 10 23:48:08.067259 systemd-networkd[1426]: docker0: Link UP
Sep 10 23:48:08.071924 dockerd[1849]: time="2025-09-10T23:48:08.071838691Z" level=info msg="Loading containers: done."
Sep 10 23:48:08.089402 dockerd[1849]: time="2025-09-10T23:48:08.089346400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:48:08.089608 dockerd[1849]: time="2025-09-10T23:48:08.089446969Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 10 23:48:08.089608 dockerd[1849]: time="2025-09-10T23:48:08.089556395Z" level=info msg="Initializing buildkit"
Sep 10 23:48:08.113825 dockerd[1849]: time="2025-09-10T23:48:08.113766109Z" level=info msg="Completed buildkit initialization"
Sep 10 23:48:08.124168 dockerd[1849]: time="2025-09-10T23:48:08.124084084Z" level=info msg="Daemon has completed initialization"
Sep 10 23:48:08.124357 dockerd[1849]: time="2025-09-10T23:48:08.124185253Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:48:08.124801 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:48:09.167942 containerd[1544]: time="2025-09-10T23:48:09.167845325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 10 23:48:09.752236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379965558.mount: Deactivated successfully.
Sep 10 23:48:11.381017 containerd[1544]: time="2025-09-10T23:48:11.379904434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:11.381017 containerd[1544]: time="2025-09-10T23:48:11.380980689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390326"
Sep 10 23:48:11.382120 containerd[1544]: time="2025-09-10T23:48:11.382069521Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:11.386002 containerd[1544]: time="2025-09-10T23:48:11.385954957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:11.387074 containerd[1544]: time="2025-09-10T23:48:11.387042868Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.219113514s"
Sep 10 23:48:11.387183 containerd[1544]: time="2025-09-10T23:48:11.387168442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 10 23:48:11.389234 containerd[1544]: time="2025-09-10T23:48:11.389188648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 10 23:48:13.335123 containerd[1544]: time="2025-09-10T23:48:13.335059695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:13.336738 containerd[1544]: time="2025-09-10T23:48:13.336692551Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547937"
Sep 10 23:48:13.337823 containerd[1544]: time="2025-09-10T23:48:13.337773701Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:13.341451 containerd[1544]: time="2025-09-10T23:48:13.341377254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:13.343240 containerd[1544]: time="2025-09-10T23:48:13.343175247Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.953953316s"
Sep 10 23:48:13.343240 containerd[1544]: time="2025-09-10T23:48:13.343212646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 10 23:48:13.343965 containerd[1544]: time="2025-09-10T23:48:13.343925324Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 10 23:48:14.323565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 10 23:48:14.325327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:14.475339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:14.490349 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:48:14.539387 kubelet[2121]: E0910 23:48:14.539298 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:48:14.542606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:48:14.542901 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:48:14.544773 systemd[1]: kubelet.service: Consumed 171ms CPU time, 105.1M memory peak.
Sep 10 23:48:15.028981 containerd[1544]: time="2025-09-10T23:48:15.028927559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:15.030416 containerd[1544]: time="2025-09-10T23:48:15.030303239Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295997"
Sep 10 23:48:15.030911 containerd[1544]: time="2025-09-10T23:48:15.030882551Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:15.033507 containerd[1544]: time="2025-09-10T23:48:15.033465775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:15.034939 containerd[1544]: time="2025-09-10T23:48:15.034907309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.690935098s"
Sep 10 23:48:15.035048 containerd[1544]: time="2025-09-10T23:48:15.035032852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 10 23:48:15.035518 containerd[1544]: time="2025-09-10T23:48:15.035494468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 10 23:48:16.073135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253176461.mount: Deactivated successfully.
Sep 10 23:48:16.434028 containerd[1544]: time="2025-09-10T23:48:16.433694967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:16.435016 containerd[1544]: time="2025-09-10T23:48:16.434974680Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240132"
Sep 10 23:48:16.435913 containerd[1544]: time="2025-09-10T23:48:16.435650161Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:16.437737 containerd[1544]: time="2025-09-10T23:48:16.437695539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:16.438424 containerd[1544]: time="2025-09-10T23:48:16.438392596Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.402866704s"
Sep 10 23:48:16.438424 containerd[1544]: time="2025-09-10T23:48:16.438424499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 10 23:48:16.438986 containerd[1544]: time="2025-09-10T23:48:16.438948552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 10 23:48:17.024763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621909426.mount: Deactivated successfully.
Sep 10 23:48:17.707609 containerd[1544]: time="2025-09-10T23:48:17.707545423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:17.708952 containerd[1544]: time="2025-09-10T23:48:17.708878015Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Sep 10 23:48:17.710436 containerd[1544]: time="2025-09-10T23:48:17.710354255Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:17.715716 containerd[1544]: time="2025-09-10T23:48:17.715086767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:17.717296 containerd[1544]: time="2025-09-10T23:48:17.717091538Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.27810612s"
Sep 10 23:48:17.717296 containerd[1544]: time="2025-09-10T23:48:17.717136966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 10 23:48:17.717932 containerd[1544]: time="2025-09-10T23:48:17.717885233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:48:18.231730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585100912.mount: Deactivated successfully.
Sep 10 23:48:18.239696 containerd[1544]: time="2025-09-10T23:48:18.238002158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:48:18.239999 containerd[1544]: time="2025-09-10T23:48:18.239838240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 10 23:48:18.240200 containerd[1544]: time="2025-09-10T23:48:18.240169020Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:48:18.242808 containerd[1544]: time="2025-09-10T23:48:18.242743425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:48:18.243747 containerd[1544]: time="2025-09-10T23:48:18.243712955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.780534ms"
Sep 10 23:48:18.243817 containerd[1544]: time="2025-09-10T23:48:18.243748174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:48:18.244347 containerd[1544]: time="2025-09-10T23:48:18.244291150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 10 23:48:18.739257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447032647.mount: Deactivated successfully.
Sep 10 23:48:21.319666 containerd[1544]: time="2025-09-10T23:48:21.318910397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:21.320337 containerd[1544]: time="2025-09-10T23:48:21.320313270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465913"
Sep 10 23:48:21.320699 containerd[1544]: time="2025-09-10T23:48:21.320678724Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:21.325998 containerd[1544]: time="2025-09-10T23:48:21.325921121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:48:21.327520 containerd[1544]: time="2025-09-10T23:48:21.327479611Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.083150202s"
Sep 10 23:48:21.327689 containerd[1544]: time="2025-09-10T23:48:21.327667800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 10 23:48:24.574227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 10 23:48:24.577839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:24.736820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:24.745363 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:48:24.786961 kubelet[2281]: E0910 23:48:24.786919 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:48:24.790057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:48:24.790345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:48:24.790784 systemd[1]: kubelet.service: Consumed 155ms CPU time, 105.1M memory peak.
Sep 10 23:48:27.357954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:27.358481 systemd[1]: kubelet.service: Consumed 155ms CPU time, 105.1M memory peak.
Sep 10 23:48:27.360800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:27.392803 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)...
Sep 10 23:48:27.393069 systemd[1]: Reloading...
Sep 10 23:48:27.543690 zram_generator::config[2351]: No configuration found.
Sep 10 23:48:27.611205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:48:27.719737 systemd[1]: Reloading finished in 326 ms.
Sep 10 23:48:27.774003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 10 23:48:27.774229 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 10 23:48:27.774613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:27.774679 systemd[1]: kubelet.service: Consumed 108ms CPU time, 95M memory peak.
Sep 10 23:48:27.778358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:27.952917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:27.973618 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:48:28.027812 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:48:28.027812 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:48:28.027812 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:48:28.028237 kubelet[2387]: I0910 23:48:28.027847 2387 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:48:29.077139 kubelet[2387]: I0910 23:48:29.077093 2387 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 10 23:48:29.077580 kubelet[2387]: I0910 23:48:29.077563 2387 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:48:29.078000 kubelet[2387]: I0910 23:48:29.077977 2387 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 10 23:48:29.114273 kubelet[2387]: I0910 23:48:29.114236 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:48:29.114817 kubelet[2387]: E0910 23:48:29.114774 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://157.90.149.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 10 23:48:29.123734 kubelet[2387]: I0910 23:48:29.123628 2387 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:48:29.126747 kubelet[2387]: I0910 23:48:29.126717 2387 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:48:29.128366 kubelet[2387]: I0910 23:48:29.128290 2387 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:48:29.128526 kubelet[2387]: I0910 23:48:29.128352 2387 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-n-474f3036a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:48:29.128739 kubelet[2387]: I0910 23:48:29.128577 2387 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:48:29.128739 kubelet[2387]: I0910 23:48:29.128586 2387 container_manager_linux.go:303] "Creating device plugin manager"
Sep 10 23:48:29.128909 kubelet[2387]: I0910 23:48:29.128839 2387 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:48:29.132327 kubelet[2387]: I0910 23:48:29.132242 2387 kubelet.go:480] "Attempting to sync node with API server"
Sep 10 23:48:29.132327 kubelet[2387]: I0910 23:48:29.132280 2387 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:48:29.132327 kubelet[2387]: I0910 23:48:29.132310 2387 kubelet.go:386] "Adding apiserver pod source"
Sep 10 23:48:29.132327 kubelet[2387]: I0910 23:48:29.132325 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:48:29.135546 kubelet[2387]: E0910 23:48:29.135481 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.90.149.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 10 23:48:29.135718 kubelet[2387]: I0910 23:48:29.135619 2387 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 10 23:48:29.136648 kubelet[2387]: I0910 23:48:29.136605 2387 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 10 23:48:29.136809 kubelet[2387]: W0910 23:48:29.136771 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:48:29.141148 kubelet[2387]: I0910 23:48:29.141109 2387 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:48:29.141148 kubelet[2387]: I0910 23:48:29.141158 2387 server.go:1289] "Started kubelet" Sep 10 23:48:29.146595 kubelet[2387]: I0910 23:48:29.146559 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:48:29.149429 kubelet[2387]: E0910 23:48:29.147946 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.149.201:6443/api/v1/namespaces/default/events\": dial tcp 157.90.149.201:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-n-474f3036a8.186410b1810449e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-n-474f3036a8,UID:ci-4372-1-0-n-474f3036a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-n-474f3036a8,},FirstTimestamp:2025-09-10 23:48:29.141133799 +0000 UTC m=+1.159077141,LastTimestamp:2025-09-10 23:48:29.141133799 +0000 UTC m=+1.159077141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-n-474f3036a8,}" Sep 10 23:48:29.149868 kubelet[2387]: E0910 23:48:29.149834 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.90.149.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-n-474f3036a8&limit=500&resourceVersion=0\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:48:29.150997 kubelet[2387]: I0910 23:48:29.150966 2387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:48:29.152024 kubelet[2387]: I0910 23:48:29.152005 2387 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:48:29.153197 
kubelet[2387]: I0910 23:48:29.153161 2387 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:48:29.153449 kubelet[2387]: E0910 23:48:29.153421 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-n-474f3036a8\" not found" Sep 10 23:48:29.154794 kubelet[2387]: I0910 23:48:29.154244 2387 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:48:29.154794 kubelet[2387]: I0910 23:48:29.154385 2387 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:48:29.157275 kubelet[2387]: I0910 23:48:29.157209 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:48:29.157568 kubelet[2387]: I0910 23:48:29.157553 2387 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:48:29.157972 kubelet[2387]: I0910 23:48:29.157948 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:48:29.164053 kubelet[2387]: E0910 23:48:29.164007 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.149.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-n-474f3036a8?timeout=10s\": dial tcp 157.90.149.201:6443: connect: connection refused" interval="200ms" Sep 10 23:48:29.164398 kubelet[2387]: E0910 23:48:29.164315 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.90.149.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:48:29.164593 kubelet[2387]: I0910 23:48:29.164561 2387 factory.go:223] Registration of the systemd container factory successfully Sep 
10 23:48:29.164998 kubelet[2387]: I0910 23:48:29.164955 2387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:48:29.166932 kubelet[2387]: I0910 23:48:29.166897 2387 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:48:29.175125 kubelet[2387]: E0910 23:48:29.175053 2387 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:48:29.176925 kubelet[2387]: I0910 23:48:29.176891 2387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:48:29.178629 kubelet[2387]: I0910 23:48:29.178274 2387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:48:29.178629 kubelet[2387]: I0910 23:48:29.178296 2387 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:48:29.178629 kubelet[2387]: I0910 23:48:29.178318 2387 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:48:29.178629 kubelet[2387]: I0910 23:48:29.178325 2387 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:48:29.178629 kubelet[2387]: E0910 23:48:29.178365 2387 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:48:29.189786 kubelet[2387]: E0910 23:48:29.189748 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.90.149.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:48:29.197241 kubelet[2387]: I0910 23:48:29.197209 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:48:29.197625 kubelet[2387]: I0910 23:48:29.197370 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:48:29.197625 kubelet[2387]: I0910 23:48:29.197393 2387 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:48:29.199784 kubelet[2387]: I0910 23:48:29.199762 2387 policy_none.go:49] "None policy: Start" Sep 10 23:48:29.199887 kubelet[2387]: I0910 23:48:29.199875 2387 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:48:29.199954 kubelet[2387]: I0910 23:48:29.199946 2387 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:48:29.206612 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 23:48:29.222350 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:48:29.226212 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 23:48:29.240315 kubelet[2387]: E0910 23:48:29.240072 2387 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:48:29.241232 kubelet[2387]: I0910 23:48:29.240843 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:48:29.241232 kubelet[2387]: I0910 23:48:29.240883 2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:48:29.242480 kubelet[2387]: I0910 23:48:29.241208 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:48:29.244203 kubelet[2387]: E0910 23:48:29.244179 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:48:29.244676 kubelet[2387]: E0910 23:48:29.244610 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-n-474f3036a8\" not found" Sep 10 23:48:29.295977 systemd[1]: Created slice kubepods-burstable-pod0d991fa11dafbe278e5218d4f792d00d.slice - libcontainer container kubepods-burstable-pod0d991fa11dafbe278e5218d4f792d00d.slice. Sep 10 23:48:29.309132 kubelet[2387]: E0910 23:48:29.309006 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.315363 systemd[1]: Created slice kubepods-burstable-pod05aa1f8f8e29f70affbc592d83ef34e8.slice - libcontainer container kubepods-burstable-pod05aa1f8f8e29f70affbc592d83ef34e8.slice. 
Sep 10 23:48:29.317664 kubelet[2387]: E0910 23:48:29.317505 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.340526 systemd[1]: Created slice kubepods-burstable-pod9f05a04860e7608a10b73ec97c31c6b7.slice - libcontainer container kubepods-burstable-pod9f05a04860e7608a10b73ec97c31c6b7.slice. Sep 10 23:48:29.345096 kubelet[2387]: I0910 23:48:29.345064 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.345372 kubelet[2387]: E0910 23:48:29.345264 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.345534 kubelet[2387]: E0910 23:48:29.345508 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.90.149.201:6443/api/v1/nodes\": dial tcp 157.90.149.201:6443: connect: connection refused" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.364784 kubelet[2387]: E0910 23:48:29.364689 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.149.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-n-474f3036a8?timeout=10s\": dial tcp 157.90.149.201:6443: connect: connection refused" interval="400ms" Sep 10 23:48:29.456557 kubelet[2387]: I0910 23:48:29.456419 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456557 kubelet[2387]: I0910 23:48:29.456527 2387 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456874 kubelet[2387]: I0910 23:48:29.456573 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456874 kubelet[2387]: I0910 23:48:29.456611 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456874 kubelet[2387]: I0910 23:48:29.456683 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456874 kubelet[2387]: I0910 23:48:29.456770 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " 
pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.456874 kubelet[2387]: I0910 23:48:29.456812 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.457090 kubelet[2387]: I0910 23:48:29.456838 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.457090 kubelet[2387]: I0910 23:48:29.456866 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f05a04860e7608a10b73ec97c31c6b7-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-n-474f3036a8\" (UID: \"9f05a04860e7608a10b73ec97c31c6b7\") " pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.550211 kubelet[2387]: I0910 23:48:29.550155 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.550941 kubelet[2387]: E0910 23:48:29.550905 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.90.149.201:6443/api/v1/nodes\": dial tcp 157.90.149.201:6443: connect: connection refused" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.612706 containerd[1544]: time="2025-09-10T23:48:29.612207444Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-n-474f3036a8,Uid:0d991fa11dafbe278e5218d4f792d00d,Namespace:kube-system,Attempt:0,}" Sep 10 23:48:29.618962 containerd[1544]: time="2025-09-10T23:48:29.618802614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-n-474f3036a8,Uid:05aa1f8f8e29f70affbc592d83ef34e8,Namespace:kube-system,Attempt:0,}" Sep 10 23:48:29.646777 containerd[1544]: time="2025-09-10T23:48:29.646449626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-n-474f3036a8,Uid:9f05a04860e7608a10b73ec97c31c6b7,Namespace:kube-system,Attempt:0,}" Sep 10 23:48:29.650450 containerd[1544]: time="2025-09-10T23:48:29.650158334Z" level=info msg="connecting to shim 214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad" address="unix:///run/containerd/s/192c1f699b3f47e3e3cb4a6097f5c39774bc7ef83b878fed4f0034c4716037a6" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:48:29.651212 containerd[1544]: time="2025-09-10T23:48:29.651180961Z" level=info msg="connecting to shim c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff" address="unix:///run/containerd/s/d71688e8b6389ad1aa3b388d6a9cdc371f2665a94eb657d9d9b00b06612f0130" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:48:29.690809 containerd[1544]: time="2025-09-10T23:48:29.690583083Z" level=info msg="connecting to shim 2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73" address="unix:///run/containerd/s/d36feb9b3d7612d5043376f78d3e88a6f294c646e58a8daa9177aba882709f09" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:48:29.692869 systemd[1]: Started cri-containerd-c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff.scope - libcontainer container c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff. 
Sep 10 23:48:29.701160 systemd[1]: Started cri-containerd-214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad.scope - libcontainer container 214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad. Sep 10 23:48:29.739935 systemd[1]: Started cri-containerd-2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73.scope - libcontainer container 2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73. Sep 10 23:48:29.766139 kubelet[2387]: E0910 23:48:29.765622 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.149.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-n-474f3036a8?timeout=10s\": dial tcp 157.90.149.201:6443: connect: connection refused" interval="800ms" Sep 10 23:48:29.779586 containerd[1544]: time="2025-09-10T23:48:29.779103585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-n-474f3036a8,Uid:05aa1f8f8e29f70affbc592d83ef34e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad\"" Sep 10 23:48:29.790400 containerd[1544]: time="2025-09-10T23:48:29.790329039Z" level=info msg="CreateContainer within sandbox \"214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:48:29.791565 containerd[1544]: time="2025-09-10T23:48:29.791519204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-n-474f3036a8,Uid:0d991fa11dafbe278e5218d4f792d00d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff\"" Sep 10 23:48:29.796962 containerd[1544]: time="2025-09-10T23:48:29.796909808Z" level=info msg="CreateContainer within sandbox \"c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:48:29.802480 containerd[1544]: time="2025-09-10T23:48:29.802403622Z" level=info msg="Container 0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:48:29.825320 containerd[1544]: time="2025-09-10T23:48:29.825276295Z" level=info msg="Container 4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:48:29.827455 containerd[1544]: time="2025-09-10T23:48:29.827416159Z" level=info msg="CreateContainer within sandbox \"214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\"" Sep 10 23:48:29.828943 containerd[1544]: time="2025-09-10T23:48:29.828914556Z" level=info msg="StartContainer for \"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\"" Sep 10 23:48:29.830852 containerd[1544]: time="2025-09-10T23:48:29.830811995Z" level=info msg="connecting to shim 0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36" address="unix:///run/containerd/s/192c1f699b3f47e3e3cb4a6097f5c39774bc7ef83b878fed4f0034c4716037a6" protocol=ttrpc version=3 Sep 10 23:48:29.833326 containerd[1544]: time="2025-09-10T23:48:29.833190163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-n-474f3036a8,Uid:9f05a04860e7608a10b73ec97c31c6b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73\"" Sep 10 23:48:29.835445 containerd[1544]: time="2025-09-10T23:48:29.835395794Z" level=info msg="CreateContainer within sandbox \"c238e889ef2bd828e5c0990cb4412e74b5ff8a669caf24362ba304df5449ffff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959\"" 
Sep 10 23:48:29.836225 containerd[1544]: time="2025-09-10T23:48:29.836198838Z" level=info msg="StartContainer for \"4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959\"" Sep 10 23:48:29.838928 containerd[1544]: time="2025-09-10T23:48:29.838881279Z" level=info msg="connecting to shim 4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959" address="unix:///run/containerd/s/d71688e8b6389ad1aa3b388d6a9cdc371f2665a94eb657d9d9b00b06612f0130" protocol=ttrpc version=3 Sep 10 23:48:29.839805 containerd[1544]: time="2025-09-10T23:48:29.839757050Z" level=info msg="CreateContainer within sandbox \"2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:48:29.851848 containerd[1544]: time="2025-09-10T23:48:29.850992226Z" level=info msg="Container 368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:48:29.858065 systemd[1]: Started cri-containerd-0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36.scope - libcontainer container 0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36. 
Sep 10 23:48:29.868756 containerd[1544]: time="2025-09-10T23:48:29.867543598Z" level=info msg="CreateContainer within sandbox \"2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\"" Sep 10 23:48:29.870474 containerd[1544]: time="2025-09-10T23:48:29.870434660Z" level=info msg="StartContainer for \"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\"" Sep 10 23:48:29.872224 containerd[1544]: time="2025-09-10T23:48:29.872188684Z" level=info msg="connecting to shim 368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e" address="unix:///run/containerd/s/d36feb9b3d7612d5043376f78d3e88a6f294c646e58a8daa9177aba882709f09" protocol=ttrpc version=3 Sep 10 23:48:29.876068 systemd[1]: Started cri-containerd-4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959.scope - libcontainer container 4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959. Sep 10 23:48:29.902935 systemd[1]: Started cri-containerd-368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e.scope - libcontainer container 368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e. 
Sep 10 23:48:29.928669 containerd[1544]: time="2025-09-10T23:48:29.928205344Z" level=info msg="StartContainer for \"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\" returns successfully" Sep 10 23:48:29.956610 kubelet[2387]: I0910 23:48:29.956433 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.957183 kubelet[2387]: E0910 23:48:29.957156 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.90.149.201:6443/api/v1/nodes\": dial tcp 157.90.149.201:6443: connect: connection refused" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:29.981259 kubelet[2387]: E0910 23:48:29.981221 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.90.149.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.90.149.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:48:29.982513 containerd[1544]: time="2025-09-10T23:48:29.982415256Z" level=info msg="StartContainer for \"4e7d194013f81d69d1cfddfd4bee904d64139dd273fd7c90e9702ecc66812959\" returns successfully" Sep 10 23:48:29.989077 containerd[1544]: time="2025-09-10T23:48:29.989024707Z" level=info msg="StartContainer for \"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\" returns successfully" Sep 10 23:48:30.198410 kubelet[2387]: E0910 23:48:30.198291 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:30.206858 kubelet[2387]: E0910 23:48:30.206284 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:30.208160 kubelet[2387]: E0910 23:48:30.208136 2387 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:30.760855 kubelet[2387]: I0910 23:48:30.760783 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:31.210149 kubelet[2387]: E0910 23:48:31.209758 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:31.210149 kubelet[2387]: E0910 23:48:31.210051 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.211221 kubelet[2387]: E0910 23:48:32.211185 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.387783 kubelet[2387]: E0910 23:48:32.387742 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-1-0-n-474f3036a8\" not found" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.433479 kubelet[2387]: I0910 23:48:32.433436 2387 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.455076 kubelet[2387]: I0910 23:48:32.455023 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.540820 kubelet[2387]: E0910 23:48:32.540605 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.540820 kubelet[2387]: I0910 23:48:32.540693 2387 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.544195 kubelet[2387]: E0910 23:48:32.544126 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.544195 kubelet[2387]: I0910 23:48:32.544198 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:32.549078 kubelet[2387]: E0910 23:48:32.548977 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-n-474f3036a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:33.135246 kubelet[2387]: I0910 23:48:33.135062 2387 apiserver.go:52] "Watching apiserver" Sep 10 23:48:33.154780 kubelet[2387]: I0910 23:48:33.154721 2387 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:48:34.749669 kubelet[2387]: I0910 23:48:34.749381 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:34.931133 systemd[1]: Reload requested from client PID 2671 ('systemctl') (unit session-7.scope)... Sep 10 23:48:34.931149 systemd[1]: Reloading... Sep 10 23:48:35.041727 zram_generator::config[2715]: No configuration found. Sep 10 23:48:35.129898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:48:35.252285 systemd[1]: Reloading finished in 320 ms. 
Sep 10 23:48:35.285830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:48:35.302376 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:48:35.302830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:48:35.302932 systemd[1]: kubelet.service: Consumed 1.631s CPU time, 128.2M memory peak. Sep 10 23:48:35.306564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:48:35.461852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:48:35.478065 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:48:35.541083 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:48:35.541083 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:48:35.541083 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:48:35.541083 kubelet[2760]: I0910 23:48:35.539981 2760 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:48:35.551379 kubelet[2760]: I0910 23:48:35.551331 2760 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:48:35.551379 kubelet[2760]: I0910 23:48:35.551372 2760 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:48:35.551737 kubelet[2760]: I0910 23:48:35.551608 2760 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:48:35.553543 kubelet[2760]: I0910 23:48:35.553520 2760 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 23:48:35.558116 kubelet[2760]: I0910 23:48:35.557849 2760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:48:35.564881 kubelet[2760]: I0910 23:48:35.564714 2760 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:48:35.568933 kubelet[2760]: I0910 23:48:35.568895 2760 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:48:35.569253 kubelet[2760]: I0910 23:48:35.569193 2760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:48:35.569402 kubelet[2760]: I0910 23:48:35.569238 2760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-n-474f3036a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:48:35.569402 kubelet[2760]: I0910 23:48:35.569399 2760 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 
23:48:35.569510 kubelet[2760]: I0910 23:48:35.569408 2760 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:48:35.569510 kubelet[2760]: I0910 23:48:35.569447 2760 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:48:35.569638 kubelet[2760]: I0910 23:48:35.569606 2760 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:48:35.569638 kubelet[2760]: I0910 23:48:35.569624 2760 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:48:35.570141 kubelet[2760]: I0910 23:48:35.570053 2760 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:48:35.570141 kubelet[2760]: I0910 23:48:35.570082 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:48:35.580357 kubelet[2760]: I0910 23:48:35.580323 2760 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:48:35.581093 kubelet[2760]: I0910 23:48:35.581069 2760 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:48:35.587171 kubelet[2760]: I0910 23:48:35.587114 2760 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:48:35.587171 kubelet[2760]: I0910 23:48:35.587159 2760 server.go:1289] "Started kubelet" Sep 10 23:48:35.592573 kubelet[2760]: I0910 23:48:35.592543 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:48:35.608465 kubelet[2760]: I0910 23:48:35.607834 2760 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:48:35.608795 kubelet[2760]: I0910 23:48:35.608778 2760 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:48:35.614273 kubelet[2760]: I0910 23:48:35.614182 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:48:35.614453 kubelet[2760]: I0910 23:48:35.614433 2760 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:48:35.614773 kubelet[2760]: I0910 23:48:35.614752 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:48:35.619452 kubelet[2760]: I0910 23:48:35.619416 2760 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:48:35.619715 kubelet[2760]: E0910 23:48:35.619689 2760 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-n-474f3036a8\" not found" Sep 10 23:48:35.624108 kubelet[2760]: I0910 23:48:35.623714 2760 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:48:35.624108 kubelet[2760]: I0910 23:48:35.623826 2760 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:48:35.628058 kubelet[2760]: I0910 23:48:35.627972 2760 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:48:35.630754 kubelet[2760]: I0910 23:48:35.630622 2760 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:48:35.630754 kubelet[2760]: I0910 23:48:35.630668 2760 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:48:35.630754 kubelet[2760]: I0910 23:48:35.630687 2760 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:48:35.630754 kubelet[2760]: I0910 23:48:35.630693 2760 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:48:35.630754 kubelet[2760]: E0910 23:48:35.630734 2760 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:48:35.644476 kubelet[2760]: I0910 23:48:35.644425 2760 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:48:35.644591 kubelet[2760]: I0910 23:48:35.644543 2760 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:48:35.646581 kubelet[2760]: E0910 23:48:35.646545 2760 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:48:35.647741 kubelet[2760]: I0910 23:48:35.647714 2760 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.722784 2760 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.722804 2760 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.722852 2760 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.722984 2760 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.722994 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.723011 2760 policy_none.go:49] "None policy: Start" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.723021 2760 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.723029 
2760 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:48:35.723905 kubelet[2760]: I0910 23:48:35.723369 2760 state_mem.go:75] "Updated machine memory state" Sep 10 23:48:35.730777 kubelet[2760]: E0910 23:48:35.730738 2760 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:48:35.731184 kubelet[2760]: E0910 23:48:35.731019 2760 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:48:35.731833 kubelet[2760]: I0910 23:48:35.731790 2760 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:48:35.731995 kubelet[2760]: I0910 23:48:35.731927 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:48:35.735839 kubelet[2760]: E0910 23:48:35.735796 2760 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:48:35.737184 kubelet[2760]: I0910 23:48:35.737144 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:48:35.846199 kubelet[2760]: I0910 23:48:35.846170 2760 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.858708 kubelet[2760]: I0910 23:48:35.858637 2760 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.859325 kubelet[2760]: I0910 23:48:35.858951 2760 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.932049 kubelet[2760]: I0910 23:48:35.932003 2760 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.932914 kubelet[2760]: I0910 23:48:35.932501 2760 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.933228 kubelet[2760]: I0910 23:48:35.932699 2760 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:35.946721 kubelet[2760]: E0910 23:48:35.946391 2760 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" already exists" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026343 kubelet[2760]: I0910 23:48:36.026209 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026343 kubelet[2760]: I0910 23:48:36.026262 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026343 kubelet[2760]: I0910 23:48:36.026282 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026343 kubelet[2760]: I0910 23:48:36.026302 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026343 kubelet[2760]: I0910 23:48:36.026319 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026550 kubelet[2760]: I0910 23:48:36.026335 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f05a04860e7608a10b73ec97c31c6b7-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-n-474f3036a8\" (UID: \"9f05a04860e7608a10b73ec97c31c6b7\") " pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 
10 23:48:36.026550 kubelet[2760]: I0910 23:48:36.026349 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d991fa11dafbe278e5218d4f792d00d-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" (UID: \"0d991fa11dafbe278e5218d4f792d00d\") " pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026550 kubelet[2760]: I0910 23:48:36.026366 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.026550 kubelet[2760]: I0910 23:48:36.026385 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05aa1f8f8e29f70affbc592d83ef34e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-n-474f3036a8\" (UID: \"05aa1f8f8e29f70affbc592d83ef34e8\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.574321 kubelet[2760]: I0910 23:48:36.574266 2760 apiserver.go:52] "Watching apiserver" Sep 10 23:48:36.623961 kubelet[2760]: I0910 23:48:36.623920 2760 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:48:36.633420 kubelet[2760]: I0910 23:48:36.633339 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" podStartSLOduration=1.63327519 podStartE2EDuration="1.63327519s" podCreationTimestamp="2025-09-10 23:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-10 23:48:36.617243478 +0000 UTC m=+1.131404511" watchObservedRunningTime="2025-09-10 23:48:36.63327519 +0000 UTC m=+1.147436223" Sep 10 23:48:36.651908 kubelet[2760]: I0910 23:48:36.651830 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" podStartSLOduration=1.651809963 podStartE2EDuration="1.651809963s" podCreationTimestamp="2025-09-10 23:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:48:36.635746408 +0000 UTC m=+1.149907441" watchObservedRunningTime="2025-09-10 23:48:36.651809963 +0000 UTC m=+1.165971036" Sep 10 23:48:36.682385 kubelet[2760]: I0910 23:48:36.682347 2760 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.683970 kubelet[2760]: I0910 23:48:36.683939 2760 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.696263 kubelet[2760]: E0910 23:48:36.696217 2760 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-n-474f3036a8\" already exists" pod="kube-system/kube-scheduler-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.699397 kubelet[2760]: E0910 23:48:36.699354 2760 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-n-474f3036a8\" already exists" pod="kube-system/kube-apiserver-ci-4372-1-0-n-474f3036a8" Sep 10 23:48:36.712915 kubelet[2760]: I0910 23:48:36.712273 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-1-0-n-474f3036a8" podStartSLOduration=2.712258549 podStartE2EDuration="2.712258549s" podCreationTimestamp="2025-09-10 23:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-10 23:48:36.651716756 +0000 UTC m=+1.165877789" watchObservedRunningTime="2025-09-10 23:48:36.712258549 +0000 UTC m=+1.226419542" Sep 10 23:48:40.249441 kubelet[2760]: I0910 23:48:40.249242 2760 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:48:40.250205 containerd[1544]: time="2025-09-10T23:48:40.250071142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:48:40.251125 kubelet[2760]: I0910 23:48:40.250599 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:48:41.316243 systemd[1]: Created slice kubepods-besteffort-pod1ed32d1a_b956_4240_827b_dade922afe6d.slice - libcontainer container kubepods-besteffort-pod1ed32d1a_b956_4240_827b_dade922afe6d.slice. Sep 10 23:48:41.355031 kubelet[2760]: I0910 23:48:41.354981 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ed32d1a-b956-4240-827b-dade922afe6d-kube-proxy\") pod \"kube-proxy-s7jtt\" (UID: \"1ed32d1a-b956-4240-827b-dade922afe6d\") " pod="kube-system/kube-proxy-s7jtt" Sep 10 23:48:41.356047 kubelet[2760]: I0910 23:48:41.355255 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ed32d1a-b956-4240-827b-dade922afe6d-xtables-lock\") pod \"kube-proxy-s7jtt\" (UID: \"1ed32d1a-b956-4240-827b-dade922afe6d\") " pod="kube-system/kube-proxy-s7jtt" Sep 10 23:48:41.356047 kubelet[2760]: I0910 23:48:41.355287 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbtd\" (UniqueName: \"kubernetes.io/projected/1ed32d1a-b956-4240-827b-dade922afe6d-kube-api-access-kxbtd\") pod \"kube-proxy-s7jtt\" (UID: \"1ed32d1a-b956-4240-827b-dade922afe6d\") " 
pod="kube-system/kube-proxy-s7jtt" Sep 10 23:48:41.356047 kubelet[2760]: I0910 23:48:41.355496 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ed32d1a-b956-4240-827b-dade922afe6d-lib-modules\") pod \"kube-proxy-s7jtt\" (UID: \"1ed32d1a-b956-4240-827b-dade922afe6d\") " pod="kube-system/kube-proxy-s7jtt" Sep 10 23:48:41.513010 systemd[1]: Created slice kubepods-besteffort-podf8aedd8d_3ab9_45a6_9f02_8cba95bd9191.slice - libcontainer container kubepods-besteffort-podf8aedd8d_3ab9_45a6_9f02_8cba95bd9191.slice. Sep 10 23:48:41.557025 kubelet[2760]: I0910 23:48:41.556850 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpgt\" (UniqueName: \"kubernetes.io/projected/f8aedd8d-3ab9-45a6-9f02-8cba95bd9191-kube-api-access-fvpgt\") pod \"tigera-operator-755d956888-97kn9\" (UID: \"f8aedd8d-3ab9-45a6-9f02-8cba95bd9191\") " pod="tigera-operator/tigera-operator-755d956888-97kn9" Sep 10 23:48:41.557025 kubelet[2760]: I0910 23:48:41.556943 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8aedd8d-3ab9-45a6-9f02-8cba95bd9191-var-lib-calico\") pod \"tigera-operator-755d956888-97kn9\" (UID: \"f8aedd8d-3ab9-45a6-9f02-8cba95bd9191\") " pod="tigera-operator/tigera-operator-755d956888-97kn9" Sep 10 23:48:41.627429 containerd[1544]: time="2025-09-10T23:48:41.627310980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7jtt,Uid:1ed32d1a-b956-4240-827b-dade922afe6d,Namespace:kube-system,Attempt:0,}" Sep 10 23:48:41.650339 containerd[1544]: time="2025-09-10T23:48:41.650282789Z" level=info msg="connecting to shim 51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9" address="unix:///run/containerd/s/90304c4673d4fbc32c1371ace52e9bf85d251f8b600c57f13fafecd467d3ccf8" namespace=k8s.io 
protocol=ttrpc version=3 Sep 10 23:48:41.694932 systemd[1]: Started cri-containerd-51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9.scope - libcontainer container 51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9. Sep 10 23:48:41.729870 containerd[1544]: time="2025-09-10T23:48:41.729826810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7jtt,Uid:1ed32d1a-b956-4240-827b-dade922afe6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9\"" Sep 10 23:48:41.735471 containerd[1544]: time="2025-09-10T23:48:41.735435004Z" level=info msg="CreateContainer within sandbox \"51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:48:41.747003 containerd[1544]: time="2025-09-10T23:48:41.746636673Z" level=info msg="Container 703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:48:41.756967 containerd[1544]: time="2025-09-10T23:48:41.756897368Z" level=info msg="CreateContainer within sandbox \"51a6394fee555074589d00039b4ea7e371e52ffb03f77bee7cb3bc6a997db9e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f\"" Sep 10 23:48:41.758212 containerd[1544]: time="2025-09-10T23:48:41.758172520Z" level=info msg="StartContainer for \"703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f\"" Sep 10 23:48:41.762110 containerd[1544]: time="2025-09-10T23:48:41.762063098Z" level=info msg="connecting to shim 703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f" address="unix:///run/containerd/s/90304c4673d4fbc32c1371ace52e9bf85d251f8b600c57f13fafecd467d3ccf8" protocol=ttrpc version=3 Sep 10 23:48:41.780816 systemd[1]: Started cri-containerd-703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f.scope - 
libcontainer container 703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f. Sep 10 23:48:41.819291 containerd[1544]: time="2025-09-10T23:48:41.818895525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-97kn9,Uid:f8aedd8d-3ab9-45a6-9f02-8cba95bd9191,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:48:41.833877 containerd[1544]: time="2025-09-10T23:48:41.833835243Z" level=info msg="StartContainer for \"703f276710c9c333c0e7e8934cc4e052ea43609fbbfa6d62a1f85790b527982f\" returns successfully" Sep 10 23:48:41.853705 containerd[1544]: time="2025-09-10T23:48:41.853133645Z" level=info msg="connecting to shim 441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693" address="unix:///run/containerd/s/10d7f5c09bdf45de0b1db30ec9d091a7c4d3d4de80928e739dca86bd8916e071" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:48:41.884851 systemd[1]: Started cri-containerd-441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693.scope - libcontainer container 441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693. Sep 10 23:48:41.937670 containerd[1544]: time="2025-09-10T23:48:41.937244923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-97kn9,Uid:f8aedd8d-3ab9-45a6-9f02-8cba95bd9191,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693\"" Sep 10 23:48:41.939844 containerd[1544]: time="2025-09-10T23:48:41.939765424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:48:44.199898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687954667.mount: Deactivated successfully. 
Sep 10 23:48:44.646833 containerd[1544]: time="2025-09-10T23:48:44.646771581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:48:44.648171 containerd[1544]: time="2025-09-10T23:48:44.648138047Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:48:44.649011 containerd[1544]: time="2025-09-10T23:48:44.648951847Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:48:44.652744 containerd[1544]: time="2025-09-10T23:48:44.651902311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:48:44.652744 containerd[1544]: time="2025-09-10T23:48:44.652547383Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.712703394s" Sep 10 23:48:44.652744 containerd[1544]: time="2025-09-10T23:48:44.652575464Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:48:44.658586 containerd[1544]: time="2025-09-10T23:48:44.658547675Z" level=info msg="CreateContainer within sandbox \"441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:48:44.668093 containerd[1544]: time="2025-09-10T23:48:44.668038898Z" level=info msg="Container 
49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:48:44.681066 containerd[1544]: time="2025-09-10T23:48:44.681005771Z" level=info msg="CreateContainer within sandbox \"441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\"" Sep 10 23:48:44.681800 containerd[1544]: time="2025-09-10T23:48:44.681763648Z" level=info msg="StartContainer for \"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\"" Sep 10 23:48:44.683479 containerd[1544]: time="2025-09-10T23:48:44.682903103Z" level=info msg="connecting to shim 49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4" address="unix:///run/containerd/s/10d7f5c09bdf45de0b1db30ec9d091a7c4d3d4de80928e739dca86bd8916e071" protocol=ttrpc version=3 Sep 10 23:48:44.707881 systemd[1]: Started cri-containerd-49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4.scope - libcontainer container 49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4. 
Sep 10 23:48:44.749346 containerd[1544]: time="2025-09-10T23:48:44.749272901Z" level=info msg="StartContainer for \"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\" returns successfully" Sep 10 23:48:45.729768 kubelet[2760]: I0910 23:48:45.729624 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s7jtt" podStartSLOduration=4.7296028 podStartE2EDuration="4.7296028s" podCreationTimestamp="2025-09-10 23:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:48:42.713774063 +0000 UTC m=+7.227935136" watchObservedRunningTime="2025-09-10 23:48:45.7296028 +0000 UTC m=+10.243763833" Sep 10 23:48:45.731440 kubelet[2760]: I0910 23:48:45.731016 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-97kn9" podStartSLOduration=2.01594155 podStartE2EDuration="4.730904261s" podCreationTimestamp="2025-09-10 23:48:41 +0000 UTC" firstStartedPulling="2025-09-10 23:48:41.938944698 +0000 UTC m=+6.453105731" lastFinishedPulling="2025-09-10 23:48:44.653907409 +0000 UTC m=+9.168068442" observedRunningTime="2025-09-10 23:48:45.724860979 +0000 UTC m=+10.239022092" watchObservedRunningTime="2025-09-10 23:48:45.730904261 +0000 UTC m=+10.245065494" Sep 10 23:48:51.013271 sudo[1831]: pam_unix(sudo:session): session closed for user root Sep 10 23:48:51.174831 sshd[1830]: Connection closed by 139.178.89.65 port 34578 Sep 10 23:48:51.176506 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:51.182726 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:48:51.183268 systemd[1]: sshd@6-157.90.149.201:22-139.178.89.65:34578.service: Deactivated successfully. Sep 10 23:48:51.190349 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 10 23:48:51.191126 systemd[1]: session-7.scope: Consumed 7.756s CPU time, 229.6M memory peak. Sep 10 23:48:51.196798 systemd-logind[1466]: Removed session 7. Sep 10 23:48:57.540412 systemd[1]: Created slice kubepods-besteffort-podd84344d5_c017_4e4c_b69a_7af6aa39ca2d.slice - libcontainer container kubepods-besteffort-podd84344d5_c017_4e4c_b69a_7af6aa39ca2d.slice. Sep 10 23:48:57.563520 kubelet[2760]: I0910 23:48:57.563330 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d84344d5-c017-4e4c-b69a-7af6aa39ca2d-typha-certs\") pod \"calico-typha-758c5b9c99-qr6kj\" (UID: \"d84344d5-c017-4e4c-b69a-7af6aa39ca2d\") " pod="calico-system/calico-typha-758c5b9c99-qr6kj" Sep 10 23:48:57.564606 kubelet[2760]: I0910 23:48:57.563972 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84344d5-c017-4e4c-b69a-7af6aa39ca2d-tigera-ca-bundle\") pod \"calico-typha-758c5b9c99-qr6kj\" (UID: \"d84344d5-c017-4e4c-b69a-7af6aa39ca2d\") " pod="calico-system/calico-typha-758c5b9c99-qr6kj" Sep 10 23:48:57.564606 kubelet[2760]: I0910 23:48:57.564427 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbgq\" (UniqueName: \"kubernetes.io/projected/d84344d5-c017-4e4c-b69a-7af6aa39ca2d-kube-api-access-qfbgq\") pod \"calico-typha-758c5b9c99-qr6kj\" (UID: \"d84344d5-c017-4e4c-b69a-7af6aa39ca2d\") " pod="calico-system/calico-typha-758c5b9c99-qr6kj" Sep 10 23:48:57.729299 systemd[1]: Created slice kubepods-besteffort-pode04ba00a_b3f2_45be_8d8a_a79b0d3fc34a.slice - libcontainer container kubepods-besteffort-pode04ba00a_b3f2_45be_8d8a_a79b0d3fc34a.slice. 
Sep 10 23:48:57.768013 kubelet[2760]: I0910 23:48:57.767951 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-lib-modules\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768013 kubelet[2760]: I0910 23:48:57.768013 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-tigera-ca-bundle\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768190 kubelet[2760]: I0910 23:48:57.768031 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-cni-bin-dir\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768190 kubelet[2760]: I0910 23:48:57.768064 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-cni-net-dir\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768451 kubelet[2760]: I0910 23:48:57.768350 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-policysync\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768451 kubelet[2760]: I0910 23:48:57.768398 2760 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-cni-log-dir\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768541 kubelet[2760]: I0910 23:48:57.768464 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-var-run-calico\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768541 kubelet[2760]: I0910 23:48:57.768488 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-xtables-lock\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768541 kubelet[2760]: I0910 23:48:57.768529 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phv7\" (UniqueName: \"kubernetes.io/projected/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-kube-api-access-8phv7\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768609 kubelet[2760]: I0910 23:48:57.768569 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-node-certs\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768631 kubelet[2760]: I0910 23:48:57.768610 2760 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-var-lib-calico\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.768781 kubelet[2760]: I0910 23:48:57.768637 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a-flexvol-driver-host\") pod \"calico-node-djz5w\" (UID: \"e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a\") " pod="calico-system/calico-node-djz5w" Sep 10 23:48:57.847095 containerd[1544]: time="2025-09-10T23:48:57.846986140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758c5b9c99-qr6kj,Uid:d84344d5-c017-4e4c-b69a-7af6aa39ca2d,Namespace:calico-system,Attempt:0,}" Sep 10 23:48:57.878541 kubelet[2760]: E0910 23:48:57.877378 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.878541 kubelet[2760]: W0910 23:48:57.877521 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.878541 kubelet[2760]: E0910 23:48:57.877556 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.882433 containerd[1544]: time="2025-09-10T23:48:57.882364299Z" level=info msg="connecting to shim e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c" address="unix:///run/containerd/s/0810718ccdfbe555fe26b8f5c882d9586a64426faa6cc2c5d64f0b209cc5ab40" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:48:57.889733 kubelet[2760]: E0910 23:48:57.887823 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.889733 kubelet[2760]: W0910 23:48:57.888019 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.889733 kubelet[2760]: E0910 23:48:57.888046 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.904693 kubelet[2760]: E0910 23:48:57.904565 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.904693 kubelet[2760]: W0910 23:48:57.904688 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.904836 kubelet[2760]: E0910 23:48:57.904711 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.917542 kubelet[2760]: E0910 23:48:57.916491 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016" Sep 10 23:48:57.939927 systemd[1]: Started cri-containerd-e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c.scope - libcontainer container e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c. Sep 10 23:48:57.959154 kubelet[2760]: E0910 23:48:57.959107 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.959297 kubelet[2760]: W0910 23:48:57.959168 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.959297 kubelet[2760]: E0910 23:48:57.959198 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.959760 kubelet[2760]: E0910 23:48:57.959738 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.959805 kubelet[2760]: W0910 23:48:57.959759 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.959826 kubelet[2760]: E0910 23:48:57.959803 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.960114 kubelet[2760]: E0910 23:48:57.960066 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.960114 kubelet[2760]: W0910 23:48:57.960096 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.960114 kubelet[2760]: E0910 23:48:57.960108 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.960597 kubelet[2760]: E0910 23:48:57.960560 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.960597 kubelet[2760]: W0910 23:48:57.960590 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.960597 kubelet[2760]: E0910 23:48:57.960603 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.961178 kubelet[2760]: E0910 23:48:57.961151 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.961178 kubelet[2760]: W0910 23:48:57.961175 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.961273 kubelet[2760]: E0910 23:48:57.961190 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.962233 kubelet[2760]: E0910 23:48:57.961868 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.962233 kubelet[2760]: W0910 23:48:57.961894 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.962233 kubelet[2760]: E0910 23:48:57.961906 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.962233 kubelet[2760]: E0910 23:48:57.962225 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.962233 kubelet[2760]: W0910 23:48:57.962236 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.962384 kubelet[2760]: E0910 23:48:57.962246 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.962613 kubelet[2760]: E0910 23:48:57.962583 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.962613 kubelet[2760]: W0910 23:48:57.962602 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.962708 kubelet[2760]: E0910 23:48:57.962625 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.963801 kubelet[2760]: E0910 23:48:57.963769 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.963801 kubelet[2760]: W0910 23:48:57.963791 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.963801 kubelet[2760]: E0910 23:48:57.963802 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.963989 kubelet[2760]: E0910 23:48:57.963966 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.963989 kubelet[2760]: W0910 23:48:57.963981 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.963989 kubelet[2760]: E0910 23:48:57.963990 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.964775 kubelet[2760]: E0910 23:48:57.964748 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.964775 kubelet[2760]: W0910 23:48:57.964768 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.964875 kubelet[2760]: E0910 23:48:57.964791 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.964973 kubelet[2760]: E0910 23:48:57.964949 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.964973 kubelet[2760]: W0910 23:48:57.964966 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.964973 kubelet[2760]: E0910 23:48:57.964975 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.965255 kubelet[2760]: E0910 23:48:57.965234 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.965255 kubelet[2760]: W0910 23:48:57.965250 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.965331 kubelet[2760]: E0910 23:48:57.965261 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.965748 kubelet[2760]: E0910 23:48:57.965719 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.965748 kubelet[2760]: W0910 23:48:57.965741 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.965748 kubelet[2760]: E0910 23:48:57.965753 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.966287 kubelet[2760]: E0910 23:48:57.966247 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.966287 kubelet[2760]: W0910 23:48:57.966267 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.966287 kubelet[2760]: E0910 23:48:57.966278 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.966452 kubelet[2760]: E0910 23:48:57.966429 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.966452 kubelet[2760]: W0910 23:48:57.966444 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.966452 kubelet[2760]: E0910 23:48:57.966452 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.967068 kubelet[2760]: E0910 23:48:57.967030 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.967068 kubelet[2760]: W0910 23:48:57.967059 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.967068 kubelet[2760]: E0910 23:48:57.967071 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.967315 kubelet[2760]: E0910 23:48:57.967289 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.967315 kubelet[2760]: W0910 23:48:57.967298 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.967315 kubelet[2760]: E0910 23:48:57.967306 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.967990 kubelet[2760]: E0910 23:48:57.967969 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.967990 kubelet[2760]: W0910 23:48:57.967984 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.968107 kubelet[2760]: E0910 23:48:57.968014 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.968255 kubelet[2760]: E0910 23:48:57.968230 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.968255 kubelet[2760]: W0910 23:48:57.968248 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.968255 kubelet[2760]: E0910 23:48:57.968258 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.973011 kubelet[2760]: E0910 23:48:57.972983 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.973011 kubelet[2760]: W0910 23:48:57.973006 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.973165 kubelet[2760]: E0910 23:48:57.973023 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.973447 kubelet[2760]: I0910 23:48:57.973417 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3865e030-478e-4bf4-875f-3e63ea712016-varrun\") pod \"csi-node-driver-4vgs4\" (UID: \"3865e030-478e-4bf4-875f-3e63ea712016\") " pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:48:57.974003 kubelet[2760]: E0910 23:48:57.973679 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.974003 kubelet[2760]: W0910 23:48:57.973699 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.974003 kubelet[2760]: E0910 23:48:57.973712 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.974003 kubelet[2760]: E0910 23:48:57.973865 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.974003 kubelet[2760]: W0910 23:48:57.973873 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.974003 kubelet[2760]: E0910 23:48:57.973881 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.974831 kubelet[2760]: E0910 23:48:57.974797 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.974831 kubelet[2760]: W0910 23:48:57.974822 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.974831 kubelet[2760]: E0910 23:48:57.974837 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.974945 kubelet[2760]: I0910 23:48:57.974861 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3865e030-478e-4bf4-875f-3e63ea712016-registration-dir\") pod \"csi-node-driver-4vgs4\" (UID: \"3865e030-478e-4bf4-875f-3e63ea712016\") " pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:48:57.975026 kubelet[2760]: E0910 23:48:57.975007 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.975026 kubelet[2760]: W0910 23:48:57.975021 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.975428 kubelet[2760]: E0910 23:48:57.975030 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.975428 kubelet[2760]: I0910 23:48:57.975093 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3865e030-478e-4bf4-875f-3e63ea712016-socket-dir\") pod \"csi-node-driver-4vgs4\" (UID: \"3865e030-478e-4bf4-875f-3e63ea712016\") " pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:48:57.975428 kubelet[2760]: E0910 23:48:57.975260 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.975428 kubelet[2760]: W0910 23:48:57.975270 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.975428 kubelet[2760]: E0910 23:48:57.975282 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.975428 kubelet[2760]: I0910 23:48:57.975309 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3865e030-478e-4bf4-875f-3e63ea712016-kubelet-dir\") pod \"csi-node-driver-4vgs4\" (UID: \"3865e030-478e-4bf4-875f-3e63ea712016\") " pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:48:57.975561 kubelet[2760]: E0910 23:48:57.975461 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.975561 kubelet[2760]: W0910 23:48:57.975472 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.975561 kubelet[2760]: E0910 23:48:57.975480 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:48:57.975561 kubelet[2760]: I0910 23:48:57.975496 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwngj\" (UniqueName: \"kubernetes.io/projected/3865e030-478e-4bf4-875f-3e63ea712016-kube-api-access-fwngj\") pod \"csi-node-driver-4vgs4\" (UID: \"3865e030-478e-4bf4-875f-3e63ea712016\") " pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:48:57.976130 kubelet[2760]: E0910 23:48:57.976061 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.976130 kubelet[2760]: W0910 23:48:57.976127 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.976227 kubelet[2760]: E0910 23:48:57.976143 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:48:57.976756 kubelet[2760]: E0910 23:48:57.976628 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:48:57.976756 kubelet[2760]: W0910 23:48:57.976750 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:48:57.976849 kubelet[2760]: E0910 23:48:57.976764 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 10 23:48:57.977343 kubelet[2760]: E0910 23:48:57.977308 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:48:57.977343 kubelet[2760]: W0910 23:48:57.977328 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:48:57.977343 kubelet[2760]: E0910 23:48:57.977340 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 10 23:48:58.035685 containerd[1544]: time="2025-09-10T23:48:58.034310888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-djz5w,Uid:e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a,Namespace:calico-system,Attempt:0,}"
Sep 10 23:48:58.059526 containerd[1544]: time="2025-09-10T23:48:58.059485364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758c5b9c99-qr6kj,Uid:d84344d5-c017-4e4c-b69a-7af6aa39ca2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c\""
Sep 10 23:48:58.064836 containerd[1544]: time="2025-09-10T23:48:58.064800995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 10 23:48:58.094068 containerd[1544]: time="2025-09-10T23:48:58.094016825Z" level=info msg="connecting to shim 89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9" address="unix:///run/containerd/s/ae1d2a23eabe4f98e3ce9af1e7d89a8f1778fd22989087818c30157c14e3e431" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:48:58.149853 systemd[1]: Started cri-containerd-89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9.scope - libcontainer container 89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9.
Sep 10 23:48:58.199280 containerd[1544]: time="2025-09-10T23:48:58.199221016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-djz5w,Uid:e04ba00a-b3f2-45be-8d8a-a79b0d3fc34a,Namespace:calico-system,Attempt:0,} returns sandbox id \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\""
Sep 10 23:48:59.433492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513491222.mount: Deactivated successfully.
Sep 10 23:48:59.633590 kubelet[2760]: E0910 23:48:59.633468 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016"
Sep 10 23:49:00.060012 containerd[1544]: time="2025-09-10T23:49:00.059934072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:00.061467 containerd[1544]: time="2025-09-10T23:49:00.061405031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 10 23:49:00.062412 containerd[1544]: time="2025-09-10T23:49:00.062366257Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:00.064949 containerd[1544]: time="2025-09-10T23:49:00.064878844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:00.066175 containerd[1544]: time="2025-09-10T23:49:00.066127718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.000355095s"
Sep 10 23:49:00.066175 containerd[1544]: time="2025-09-10T23:49:00.066164919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 10 23:49:00.069351 containerd[1544]: time="2025-09-10T23:49:00.067798002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 10 23:49:00.092695 containerd[1544]: time="2025-09-10T23:49:00.092492503Z" level=info msg="CreateContainer within sandbox \"e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 23:49:00.102908 containerd[1544]: time="2025-09-10T23:49:00.102861140Z" level=info msg="Container b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:00.106992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659701490.mount: Deactivated successfully.
Sep 10 23:49:00.117285 containerd[1544]: time="2025-09-10T23:49:00.117216884Z" level=info msg="CreateContainer within sandbox \"e172cb00e8da86844cc22288355371f79876baed0915416fbe82388a7607e80c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9\""
Sep 10 23:49:00.119113 containerd[1544]: time="2025-09-10T23:49:00.119075694Z" level=info msg="StartContainer for \"b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9\""
Sep 10 23:49:00.121163 containerd[1544]: time="2025-09-10T23:49:00.121105468Z" level=info msg="connecting to shim b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9" address="unix:///run/containerd/s/0810718ccdfbe555fe26b8f5c882d9586a64426faa6cc2c5d64f0b209cc5ab40" protocol=ttrpc version=3
Sep 10 23:49:00.146855 systemd[1]: Started cri-containerd-b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9.scope - libcontainer container b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9.
Sep 10 23:49:00.199680 containerd[1544]: time="2025-09-10T23:49:00.199560007Z" level=info msg="StartContainer for \"b229765f8af24bcd39cc96b840e8cbdfaadee0b9415d263eb4df257e19776ca9\" returns successfully"
Sep 10 23:49:00.786911 kubelet[2760]: E0910 23:49:00.786802 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:49:00.786911 kubelet[2760]: W0910 23:49:00.786827 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:49:00.786911 kubelet[2760]: E0910 23:49:00.786866 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:49:00.802580 kubelet[2760]: E0910 23:49:00.802504 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:49:00.802580 kubelet[2760]: W0910 23:49:00.802578 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:49:00.802976 kubelet[2760]: E0910 23:49:00.802605 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 10 23:49:00.803209 kubelet[2760]: E0910 23:49:00.803183 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.803209 kubelet[2760]: W0910 23:49:00.803203 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.803435 kubelet[2760]: E0910 23:49:00.803215 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.803571 kubelet[2760]: E0910 23:49:00.803553 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.803571 kubelet[2760]: W0910 23:49:00.803566 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.803848 kubelet[2760]: E0910 23:49:00.803577 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.804073 kubelet[2760]: E0910 23:49:00.803956 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.804073 kubelet[2760]: W0910 23:49:00.803978 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.804073 kubelet[2760]: E0910 23:49:00.803989 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.804714 kubelet[2760]: E0910 23:49:00.804475 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.804714 kubelet[2760]: W0910 23:49:00.804485 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.804714 kubelet[2760]: E0910 23:49:00.804494 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.804935 kubelet[2760]: E0910 23:49:00.804922 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.804935 kubelet[2760]: W0910 23:49:00.804934 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.805017 kubelet[2760]: E0910 23:49:00.804943 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.805264 kubelet[2760]: E0910 23:49:00.805243 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.805356 kubelet[2760]: W0910 23:49:00.805265 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.805356 kubelet[2760]: E0910 23:49:00.805279 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.805566 kubelet[2760]: E0910 23:49:00.805549 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.805566 kubelet[2760]: W0910 23:49:00.805564 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.805743 kubelet[2760]: E0910 23:49:00.805579 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.806075 kubelet[2760]: E0910 23:49:00.806060 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.806075 kubelet[2760]: W0910 23:49:00.806076 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.806214 kubelet[2760]: E0910 23:49:00.806088 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.806414 kubelet[2760]: E0910 23:49:00.806395 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.806454 kubelet[2760]: W0910 23:49:00.806413 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.806454 kubelet[2760]: E0910 23:49:00.806427 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.806698 kubelet[2760]: E0910 23:49:00.806681 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.806698 kubelet[2760]: W0910 23:49:00.806698 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.806885 kubelet[2760]: E0910 23:49:00.806711 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.806979 kubelet[2760]: E0910 23:49:00.806966 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.807050 kubelet[2760]: W0910 23:49:00.806979 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.807050 kubelet[2760]: E0910 23:49:00.806989 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.807256 kubelet[2760]: E0910 23:49:00.807240 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.807256 kubelet[2760]: W0910 23:49:00.807255 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.807353 kubelet[2760]: E0910 23:49:00.807265 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.807505 kubelet[2760]: E0910 23:49:00.807491 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.807554 kubelet[2760]: W0910 23:49:00.807505 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.807554 kubelet[2760]: E0910 23:49:00.807516 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.807932 kubelet[2760]: E0910 23:49:00.807916 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.807987 kubelet[2760]: W0910 23:49:00.807934 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.807987 kubelet[2760]: E0910 23:49:00.807948 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.808226 kubelet[2760]: E0910 23:49:00.808208 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.808272 kubelet[2760]: W0910 23:49:00.808232 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.808272 kubelet[2760]: E0910 23:49:00.808247 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:00.808565 kubelet[2760]: E0910 23:49:00.808548 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.808617 kubelet[2760]: W0910 23:49:00.808566 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.808861 kubelet[2760]: E0910 23:49:00.808786 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:49:00.809334 kubelet[2760]: E0910 23:49:00.809315 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:49:00.809386 kubelet[2760]: W0910 23:49:00.809337 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:49:00.809386 kubelet[2760]: E0910 23:49:00.809352 2760 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:49:01.502124 containerd[1544]: time="2025-09-10T23:49:01.502055911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:01.503019 containerd[1544]: time="2025-09-10T23:49:01.502772770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:49:01.503850 containerd[1544]: time="2025-09-10T23:49:01.503808716Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:01.505965 containerd[1544]: time="2025-09-10T23:49:01.505931652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:01.506780 containerd[1544]: time="2025-09-10T23:49:01.506750953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.43891243s" Sep 10 23:49:01.506969 containerd[1544]: time="2025-09-10T23:49:01.506876756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:49:01.511504 containerd[1544]: time="2025-09-10T23:49:01.511445035Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:49:01.522923 containerd[1544]: time="2025-09-10T23:49:01.521826345Z" level=info msg="Container 128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:01.533740 containerd[1544]: time="2025-09-10T23:49:01.533695413Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\"" Sep 10 23:49:01.535276 containerd[1544]: time="2025-09-10T23:49:01.535210013Z" level=info msg="StartContainer for \"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\"" Sep 10 23:49:01.537329 containerd[1544]: time="2025-09-10T23:49:01.537242505Z" level=info msg="connecting to shim 128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71" address="unix:///run/containerd/s/ae1d2a23eabe4f98e3ce9af1e7d89a8f1778fd22989087818c30157c14e3e431" protocol=ttrpc version=3 Sep 10 23:49:01.569028 systemd[1]: Started cri-containerd-128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71.scope - libcontainer container 
128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71. Sep 10 23:49:01.624775 containerd[1544]: time="2025-09-10T23:49:01.624711979Z" level=info msg="StartContainer for \"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\" returns successfully" Sep 10 23:49:01.639661 kubelet[2760]: E0910 23:49:01.639600 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016" Sep 10 23:49:01.654577 systemd[1]: cri-containerd-128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71.scope: Deactivated successfully. Sep 10 23:49:01.658697 containerd[1544]: time="2025-09-10T23:49:01.658545738Z" level=info msg="received exit event container_id:\"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\" id:\"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\" pid:3434 exited_at:{seconds:1757548141 nanos:657669156}" Sep 10 23:49:01.658955 containerd[1544]: time="2025-09-10T23:49:01.658627580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\" id:\"128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71\" pid:3434 exited_at:{seconds:1757548141 nanos:657669156}" Sep 10 23:49:01.690448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-128509cfda89326e98e416f2f163824986f708b3e5ab778596a244c57b373e71-rootfs.mount: Deactivated successfully. 
Sep 10 23:49:01.757771 kubelet[2760]: I0910 23:49:01.757341 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:49:01.781170 kubelet[2760]: I0910 23:49:01.781077 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-758c5b9c99-qr6kj" podStartSLOduration=2.777813987 podStartE2EDuration="4.781058883s" podCreationTimestamp="2025-09-10 23:48:57 +0000 UTC" firstStartedPulling="2025-09-10 23:48:58.063859408 +0000 UTC m=+22.578020441" lastFinishedPulling="2025-09-10 23:49:00.067104304 +0000 UTC m=+24.581265337" observedRunningTime="2025-09-10 23:49:00.771817757 +0000 UTC m=+25.285978790" watchObservedRunningTime="2025-09-10 23:49:01.781058883 +0000 UTC m=+26.295219916" Sep 10 23:49:02.766380 containerd[1544]: time="2025-09-10T23:49:02.766122619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 23:49:03.631464 kubelet[2760]: E0910 23:49:03.631362 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016" Sep 10 23:49:05.237245 containerd[1544]: time="2025-09-10T23:49:05.236810143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.239720 containerd[1544]: time="2025-09-10T23:49:05.238910792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 23:49:05.241099 containerd[1544]: time="2025-09-10T23:49:05.241035161Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.243677 containerd[1544]: 
time="2025-09-10T23:49:05.243506779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.244717 containerd[1544]: time="2025-09-10T23:49:05.244333519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.478163818s" Sep 10 23:49:05.244717 containerd[1544]: time="2025-09-10T23:49:05.244375160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 23:49:05.250678 containerd[1544]: time="2025-09-10T23:49:05.250145375Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 23:49:05.260031 containerd[1544]: time="2025-09-10T23:49:05.259991525Z" level=info msg="Container 5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:05.270009 containerd[1544]: time="2025-09-10T23:49:05.269966078Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\"" Sep 10 23:49:05.271891 containerd[1544]: time="2025-09-10T23:49:05.271826842Z" level=info msg="StartContainer for \"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\"" Sep 10 23:49:05.273286 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3326296904.mount: Deactivated successfully. Sep 10 23:49:05.274314 containerd[1544]: time="2025-09-10T23:49:05.274277459Z" level=info msg="connecting to shim 5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b" address="unix:///run/containerd/s/ae1d2a23eabe4f98e3ce9af1e7d89a8f1778fd22989087818c30157c14e3e431" protocol=ttrpc version=3 Sep 10 23:49:05.301824 systemd[1]: Started cri-containerd-5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b.scope - libcontainer container 5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b. Sep 10 23:49:05.349956 containerd[1544]: time="2025-09-10T23:49:05.349887708Z" level=info msg="StartContainer for \"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\" returns successfully" Sep 10 23:49:05.632361 kubelet[2760]: E0910 23:49:05.632029 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016" Sep 10 23:49:05.931402 containerd[1544]: time="2025-09-10T23:49:05.931174426Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:49:05.934803 systemd[1]: cri-containerd-5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b.scope: Deactivated successfully. Sep 10 23:49:05.935464 systemd[1]: cri-containerd-5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b.scope: Consumed 520ms CPU time, 186.4M memory peak, 165.8M written to disk. 
Sep 10 23:49:05.936632 containerd[1544]: time="2025-09-10T23:49:05.936497791Z" level=info msg="received exit event container_id:\"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\" id:\"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\" pid:3495 exited_at:{seconds:1757548145 nanos:936176743}" Sep 10 23:49:05.937069 containerd[1544]: time="2025-09-10T23:49:05.937040403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\" id:\"5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b\" pid:3495 exited_at:{seconds:1757548145 nanos:936176743}" Sep 10 23:49:05.951315 kubelet[2760]: I0910 23:49:05.951278 2760 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:49:05.971508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ebede18d20c37fb304be2ea5063d9aebec09d44ff501eb51e40a64c86be2f5b-rootfs.mount: Deactivated successfully. Sep 10 23:49:06.072448 systemd[1]: Created slice kubepods-burstable-pod1f271b08_153c_491b_807a_29baad083a03.slice - libcontainer container kubepods-burstable-pod1f271b08_153c_491b_807a_29baad083a03.slice. Sep 10 23:49:06.090468 systemd[1]: Created slice kubepods-burstable-pode51822d2_3192_476b_8d88_be0574f27d69.slice - libcontainer container kubepods-burstable-pode51822d2_3192_476b_8d88_be0574f27d69.slice. Sep 10 23:49:06.101832 systemd[1]: Created slice kubepods-besteffort-pod85486300_8241_4b09_b357_178b6193527e.slice - libcontainer container kubepods-besteffort-pod85486300_8241_4b09_b357_178b6193527e.slice. Sep 10 23:49:06.115061 systemd[1]: Created slice kubepods-besteffort-podbba1b489_acc9_4ee7_bd26_c1c2543cea09.slice - libcontainer container kubepods-besteffort-podbba1b489_acc9_4ee7_bd26_c1c2543cea09.slice. 
Sep 10 23:49:06.122493 systemd[1]: Created slice kubepods-besteffort-podd4ce0378_6a09_4538_a889_a9c466eebe4f.slice - libcontainer container kubepods-besteffort-podd4ce0378_6a09_4538_a889_a9c466eebe4f.slice. Sep 10 23:49:06.136805 systemd[1]: Created slice kubepods-besteffort-pod15bca87c_125d_4788_af8c_2c683525ab9f.slice - libcontainer container kubepods-besteffort-pod15bca87c_125d_4788_af8c_2c683525ab9f.slice. Sep 10 23:49:06.145678 kubelet[2760]: I0910 23:49:06.145617 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f271b08-153c-491b-807a-29baad083a03-config-volume\") pod \"coredns-674b8bbfcf-zwk7w\" (UID: \"1f271b08-153c-491b-807a-29baad083a03\") " pod="kube-system/coredns-674b8bbfcf-zwk7w" Sep 10 23:49:06.146279 systemd[1]: Created slice kubepods-besteffort-pod195bfbbb_cd1a_4b25_aa6f_d7e8a3139277.slice - libcontainer container kubepods-besteffort-pod195bfbbb_cd1a_4b25_aa6f_d7e8a3139277.slice. 
Sep 10 23:49:06.147365 kubelet[2760]: I0910 23:49:06.147340 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5q7\" (UniqueName: \"kubernetes.io/projected/195bfbbb-cd1a-4b25-aa6f-d7e8a3139277-kube-api-access-zv5q7\") pod \"calico-apiserver-868f5f4df9-gk4bh\" (UID: \"195bfbbb-cd1a-4b25-aa6f-d7e8a3139277\") " pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" Sep 10 23:49:06.147665 kubelet[2760]: I0910 23:49:06.147624 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/15bca87c-125d-4788-af8c-2c683525ab9f-goldmane-key-pair\") pod \"goldmane-54d579b49d-g2jf2\" (UID: \"15bca87c-125d-4788-af8c-2c683525ab9f\") " pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.147942 kubelet[2760]: I0910 23:49:06.147815 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bba1b489-acc9-4ee7-bd26-c1c2543cea09-calico-apiserver-certs\") pod \"calico-apiserver-868f5f4df9-98krg\" (UID: \"bba1b489-acc9-4ee7-bd26-c1c2543cea09\") " pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" Sep 10 23:49:06.148041 kubelet[2760]: I0910 23:49:06.148024 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-ca-bundle\") pod \"whisker-5ffc5df785-9297n\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " pod="calico-system/whisker-5ffc5df785-9297n" Sep 10 23:49:06.148113 kubelet[2760]: I0910 23:49:06.148102 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q7q\" (UniqueName: \"kubernetes.io/projected/e51822d2-3192-476b-8d88-be0574f27d69-kube-api-access-b5q7q\") pod 
\"coredns-674b8bbfcf-spgv2\" (UID: \"e51822d2-3192-476b-8d88-be0574f27d69\") " pod="kube-system/coredns-674b8bbfcf-spgv2" Sep 10 23:49:06.148196 kubelet[2760]: I0910 23:49:06.148185 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15bca87c-125d-4788-af8c-2c683525ab9f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-g2jf2\" (UID: \"15bca87c-125d-4788-af8c-2c683525ab9f\") " pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.148319 kubelet[2760]: I0910 23:49:06.148305 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85486300-8241-4b09-b357-178b6193527e-tigera-ca-bundle\") pod \"calico-kube-controllers-596bb59679-rdc8k\" (UID: \"85486300-8241-4b09-b357-178b6193527e\") " pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" Sep 10 23:49:06.148403 kubelet[2760]: I0910 23:49:06.148389 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2bl\" (UniqueName: \"kubernetes.io/projected/85486300-8241-4b09-b357-178b6193527e-kube-api-access-qh2bl\") pod \"calico-kube-controllers-596bb59679-rdc8k\" (UID: \"85486300-8241-4b09-b357-178b6193527e\") " pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" Sep 10 23:49:06.148612 kubelet[2760]: I0910 23:49:06.148471 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bca87c-125d-4788-af8c-2c683525ab9f-config\") pod \"goldmane-54d579b49d-g2jf2\" (UID: \"15bca87c-125d-4788-af8c-2c683525ab9f\") " pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.148729 kubelet[2760]: I0910 23:49:06.148708 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp5tw\" 
(UniqueName: \"kubernetes.io/projected/1f271b08-153c-491b-807a-29baad083a03-kube-api-access-wp5tw\") pod \"coredns-674b8bbfcf-zwk7w\" (UID: \"1f271b08-153c-491b-807a-29baad083a03\") " pod="kube-system/coredns-674b8bbfcf-zwk7w" Sep 10 23:49:06.148873 kubelet[2760]: I0910 23:49:06.148814 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rdf\" (UniqueName: \"kubernetes.io/projected/d4ce0378-6a09-4538-a889-a9c466eebe4f-kube-api-access-99rdf\") pod \"whisker-5ffc5df785-9297n\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " pod="calico-system/whisker-5ffc5df785-9297n" Sep 10 23:49:06.148873 kubelet[2760]: I0910 23:49:06.148838 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gw7c\" (UniqueName: \"kubernetes.io/projected/bba1b489-acc9-4ee7-bd26-c1c2543cea09-kube-api-access-4gw7c\") pod \"calico-apiserver-868f5f4df9-98krg\" (UID: \"bba1b489-acc9-4ee7-bd26-c1c2543cea09\") " pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" Sep 10 23:49:06.148957 kubelet[2760]: I0910 23:49:06.148862 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-backend-key-pair\") pod \"whisker-5ffc5df785-9297n\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " pod="calico-system/whisker-5ffc5df785-9297n" Sep 10 23:49:06.149105 kubelet[2760]: I0910 23:49:06.149008 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51822d2-3192-476b-8d88-be0574f27d69-config-volume\") pod \"coredns-674b8bbfcf-spgv2\" (UID: \"e51822d2-3192-476b-8d88-be0574f27d69\") " pod="kube-system/coredns-674b8bbfcf-spgv2" Sep 10 23:49:06.149105 kubelet[2760]: I0910 23:49:06.149048 2760 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/195bfbbb-cd1a-4b25-aa6f-d7e8a3139277-calico-apiserver-certs\") pod \"calico-apiserver-868f5f4df9-gk4bh\" (UID: \"195bfbbb-cd1a-4b25-aa6f-d7e8a3139277\") " pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" Sep 10 23:49:06.149105 kubelet[2760]: I0910 23:49:06.149064 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2f4s\" (UniqueName: \"kubernetes.io/projected/15bca87c-125d-4788-af8c-2c683525ab9f-kube-api-access-c2f4s\") pod \"goldmane-54d579b49d-g2jf2\" (UID: \"15bca87c-125d-4788-af8c-2c683525ab9f\") " pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.378339 containerd[1544]: time="2025-09-10T23:49:06.378200595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwk7w,Uid:1f271b08-153c-491b-807a-29baad083a03,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:06.397540 containerd[1544]: time="2025-09-10T23:49:06.397492156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgv2,Uid:e51822d2-3192-476b-8d88-be0574f27d69,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:06.407551 containerd[1544]: time="2025-09-10T23:49:06.407288460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bb59679-rdc8k,Uid:85486300-8241-4b09-b357-178b6193527e,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:06.421290 containerd[1544]: time="2025-09-10T23:49:06.421244899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-98krg,Uid:bba1b489-acc9-4ee7-bd26-c1c2543cea09,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:49:06.428192 containerd[1544]: time="2025-09-10T23:49:06.427933451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffc5df785-9297n,Uid:d4ce0378-6a09-4538-a889-a9c466eebe4f,Namespace:calico-system,Attempt:0,}" Sep 10 
23:49:06.443752 containerd[1544]: time="2025-09-10T23:49:06.443715732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g2jf2,Uid:15bca87c-125d-4788-af8c-2c683525ab9f,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:06.452108 containerd[1544]: time="2025-09-10T23:49:06.451976921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-gk4bh,Uid:195bfbbb-cd1a-4b25-aa6f-d7e8a3139277,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:49:06.553054 containerd[1544]: time="2025-09-10T23:49:06.552916346Z" level=error msg="Failed to destroy network for sandbox \"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.555508 containerd[1544]: time="2025-09-10T23:49:06.555448564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwk7w,Uid:1f271b08-153c-491b-807a-29baad083a03,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.556519 kubelet[2760]: E0910 23:49:06.556051 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.556519 kubelet[2760]: E0910 23:49:06.556127 2760 kuberuntime_sandbox.go:70] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zwk7w" Sep 10 23:49:06.556519 kubelet[2760]: E0910 23:49:06.556146 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zwk7w" Sep 10 23:49:06.556826 kubelet[2760]: E0910 23:49:06.556199 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zwk7w_kube-system(1f271b08-153c-491b-807a-29baad083a03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zwk7w_kube-system(1f271b08-153c-491b-807a-29baad083a03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25e0185a89e56ad966e670486e92b425e2a7262fce09e4f7c2f1f2488b350e24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zwk7w" podUID="1f271b08-153c-491b-807a-29baad083a03" Sep 10 23:49:06.583030 containerd[1544]: time="2025-09-10T23:49:06.582981353Z" level=error msg="Failed to destroy network for sandbox \"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.590423 containerd[1544]: time="2025-09-10T23:49:06.590300480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgv2,Uid:e51822d2-3192-476b-8d88-be0574f27d69,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.592185 kubelet[2760]: E0910 23:49:06.590741 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.592185 kubelet[2760]: E0910 23:49:06.590853 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-spgv2" Sep 10 23:49:06.592185 kubelet[2760]: E0910 23:49:06.590874 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-spgv2" Sep 10 23:49:06.592382 kubelet[2760]: E0910 23:49:06.590934 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-spgv2_kube-system(e51822d2-3192-476b-8d88-be0574f27d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-spgv2_kube-system(e51822d2-3192-476b-8d88-be0574f27d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262bc6d4ec26a5f1a8dbaf54c0f51c01ef4d999c1e5f063b19465784ad791ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-spgv2" podUID="e51822d2-3192-476b-8d88-be0574f27d69" Sep 10 23:49:06.626597 containerd[1544]: time="2025-09-10T23:49:06.626546468Z" level=error msg="Failed to destroy network for sandbox \"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.630002 containerd[1544]: time="2025-09-10T23:49:06.629724181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-gk4bh,Uid:195bfbbb-cd1a-4b25-aa6f-d7e8a3139277,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.630774 kubelet[2760]: E0910 23:49:06.630354 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.631543 kubelet[2760]: E0910 23:49:06.631115 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" Sep 10 23:49:06.631543 kubelet[2760]: E0910 23:49:06.631161 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" Sep 10 23:49:06.631543 kubelet[2760]: E0910 23:49:06.631240 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868f5f4df9-gk4bh_calico-apiserver(195bfbbb-cd1a-4b25-aa6f-d7e8a3139277)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868f5f4df9-gk4bh_calico-apiserver(195bfbbb-cd1a-4b25-aa6f-d7e8a3139277)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de898ba38bed79cae8145fa17d68430ca46968fc83d9bd294893457dffe6a7b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" podUID="195bfbbb-cd1a-4b25-aa6f-d7e8a3139277" Sep 10 23:49:06.640350 containerd[1544]: time="2025-09-10T23:49:06.640281822Z" level=error msg="Failed to destroy network for sandbox \"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.641110 containerd[1544]: time="2025-09-10T23:49:06.641079960Z" level=error msg="Failed to destroy network for sandbox \"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.642775 containerd[1544]: time="2025-09-10T23:49:06.642589915Z" level=error msg="Failed to destroy network for sandbox \"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.643703 containerd[1544]: time="2025-09-10T23:49:06.643614338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-98krg,Uid:bba1b489-acc9-4ee7-bd26-c1c2543cea09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.644017 kubelet[2760]: E0910 23:49:06.643890 2760 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.644017 kubelet[2760]: E0910 23:49:06.643950 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" Sep 10 23:49:06.644017 kubelet[2760]: E0910 23:49:06.643975 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" Sep 10 23:49:06.645130 kubelet[2760]: E0910 23:49:06.644029 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868f5f4df9-98krg_calico-apiserver(bba1b489-acc9-4ee7-bd26-c1c2543cea09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868f5f4df9-98krg_calico-apiserver(bba1b489-acc9-4ee7-bd26-c1c2543cea09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29ec60b3b8b4c8d88e9e809910d6d31e154cf36cc68c6b93d22d335bf2aa5d8d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" podUID="bba1b489-acc9-4ee7-bd26-c1c2543cea09" Sep 10 23:49:06.646677 containerd[1544]: time="2025-09-10T23:49:06.646404162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bb59679-rdc8k,Uid:85486300-8241-4b09-b357-178b6193527e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.647083 kubelet[2760]: E0910 23:49:06.646620 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.647083 kubelet[2760]: E0910 23:49:06.646961 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" Sep 10 23:49:06.647083 kubelet[2760]: E0910 23:49:06.646982 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" Sep 10 23:49:06.647196 kubelet[2760]: E0910 23:49:06.647040 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596bb59679-rdc8k_calico-system(85486300-8241-4b09-b357-178b6193527e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596bb59679-rdc8k_calico-system(85486300-8241-4b09-b357-178b6193527e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a90b6dcb3278cba6ef141d9ad09cb605c33bae4cb45ec5f5e45b4f3e9f6bf4ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" podUID="85486300-8241-4b09-b357-178b6193527e" Sep 10 23:49:06.647774 containerd[1544]: time="2025-09-10T23:49:06.647637030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffc5df785-9297n,Uid:d4ce0378-6a09-4538-a889-a9c466eebe4f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.648526 kubelet[2760]: E0910 23:49:06.648415 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.649960 kubelet[2760]: E0910 23:49:06.649816 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ffc5df785-9297n" Sep 10 23:49:06.649960 kubelet[2760]: E0910 23:49:06.649861 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ffc5df785-9297n" Sep 10 23:49:06.649960 kubelet[2760]: E0910 23:49:06.649907 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ffc5df785-9297n_calico-system(d4ce0378-6a09-4538-a889-a9c466eebe4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ffc5df785-9297n_calico-system(d4ce0378-6a09-4538-a889-a9c466eebe4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0995bdead19cda20ae0c04d50efce1605a34055263457ac303ec1489937201b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ffc5df785-9297n" 
podUID="d4ce0378-6a09-4538-a889-a9c466eebe4f" Sep 10 23:49:06.658942 containerd[1544]: time="2025-09-10T23:49:06.658893487Z" level=error msg="Failed to destroy network for sandbox \"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.660989 containerd[1544]: time="2025-09-10T23:49:06.660825891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g2jf2,Uid:15bca87c-125d-4788-af8c-2c683525ab9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.661519 kubelet[2760]: E0910 23:49:06.661457 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:06.661596 kubelet[2760]: E0910 23:49:06.661558 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.661986 kubelet[2760]: E0910 
23:49:06.661614 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g2jf2" Sep 10 23:49:06.662551 kubelet[2760]: E0910 23:49:06.662062 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-g2jf2_calico-system(15bca87c-125d-4788-af8c-2c683525ab9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-g2jf2_calico-system(15bca87c-125d-4788-af8c-2c683525ab9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0103d1baf548d03d2357c76a5334de0109ec9c8c0f81c972d1334923d54a9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-g2jf2" podUID="15bca87c-125d-4788-af8c-2c683525ab9f" Sep 10 23:49:06.796457 containerd[1544]: time="2025-09-10T23:49:06.795580169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:49:07.642385 systemd[1]: Created slice kubepods-besteffort-pod3865e030_478e_4bf4_875f_3e63ea712016.slice - libcontainer container kubepods-besteffort-pod3865e030_478e_4bf4_875f_3e63ea712016.slice. 
Sep 10 23:49:07.646005 containerd[1544]: time="2025-09-10T23:49:07.645963981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vgs4,Uid:3865e030-478e-4bf4-875f-3e63ea712016,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:07.701968 containerd[1544]: time="2025-09-10T23:49:07.701915750Z" level=error msg="Failed to destroy network for sandbox \"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:07.704293 systemd[1]: run-netns-cni\x2d221d8936\x2d2746\x2d3f97\x2d75be\x2d1ffce4e973f8.mount: Deactivated successfully. Sep 10 23:49:07.706303 containerd[1544]: time="2025-09-10T23:49:07.706159084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vgs4,Uid:3865e030-478e-4bf4-875f-3e63ea712016,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:07.706704 kubelet[2760]: E0910 23:49:07.706628 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:49:07.707213 kubelet[2760]: E0910 23:49:07.707002 2760 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:49:07.707213 kubelet[2760]: E0910 23:49:07.707032 2760 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vgs4" Sep 10 23:49:07.707213 kubelet[2760]: E0910 23:49:07.707115 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4vgs4_calico-system(3865e030-478e-4bf4-875f-3e63ea712016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4vgs4_calico-system(3865e030-478e-4bf4-875f-3e63ea712016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bc8a9483b8a241a2372d4d535ad65e7cb1ade45d0b1c4c952bb807b67c74c0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4vgs4" podUID="3865e030-478e-4bf4-875f-3e63ea712016" Sep 10 23:49:13.397869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374899081.mount: Deactivated successfully. 
Sep 10 23:49:13.420517 containerd[1544]: time="2025-09-10T23:49:13.419242180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:13.420517 containerd[1544]: time="2025-09-10T23:49:13.420463484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:49:13.421134 containerd[1544]: time="2025-09-10T23:49:13.420864572Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:13.422617 containerd[1544]: time="2025-09-10T23:49:13.422583246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:13.423053 containerd[1544]: time="2025-09-10T23:49:13.423016775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.627387325s" Sep 10 23:49:13.423053 containerd[1544]: time="2025-09-10T23:49:13.423052016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:49:13.448164 containerd[1544]: time="2025-09-10T23:49:13.448117913Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:49:13.485632 containerd[1544]: time="2025-09-10T23:49:13.483819141Z" level=info msg="Container 
216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:13.509412 containerd[1544]: time="2025-09-10T23:49:13.509359728Z" level=info msg="CreateContainer within sandbox \"89e4663b5cf3ea9a477c66eeb976401b20eaa0d1653528c43c8fac7ede898dc9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\"" Sep 10 23:49:13.510654 containerd[1544]: time="2025-09-10T23:49:13.510598832Z" level=info msg="StartContainer for \"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\"" Sep 10 23:49:13.512761 containerd[1544]: time="2025-09-10T23:49:13.512724634Z" level=info msg="connecting to shim 216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9" address="unix:///run/containerd/s/ae1d2a23eabe4f98e3ce9af1e7d89a8f1778fd22989087818c30157c14e3e431" protocol=ttrpc version=3 Sep 10 23:49:13.572867 systemd[1]: Started cri-containerd-216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9.scope - libcontainer container 216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9. Sep 10 23:49:13.621690 containerd[1544]: time="2025-09-10T23:49:13.621566233Z" level=info msg="StartContainer for \"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" returns successfully" Sep 10 23:49:13.773694 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:49:13.774519 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 23:49:13.846079 kubelet[2760]: I0910 23:49:13.846011 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-djz5w" podStartSLOduration=1.623492255 podStartE2EDuration="16.845992845s" podCreationTimestamp="2025-09-10 23:48:57 +0000 UTC" firstStartedPulling="2025-09-10 23:48:58.201885732 +0000 UTC m=+22.716046805" lastFinishedPulling="2025-09-10 23:49:13.424386362 +0000 UTC m=+37.938547395" observedRunningTime="2025-09-10 23:49:13.845776641 +0000 UTC m=+38.359937714" watchObservedRunningTime="2025-09-10 23:49:13.845992845 +0000 UTC m=+38.360153878" Sep 10 23:49:14.010362 kubelet[2760]: I0910 23:49:14.009785 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rdf\" (UniqueName: \"kubernetes.io/projected/d4ce0378-6a09-4538-a889-a9c466eebe4f-kube-api-access-99rdf\") pod \"d4ce0378-6a09-4538-a889-a9c466eebe4f\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " Sep 10 23:49:14.010362 kubelet[2760]: I0910 23:49:14.009937 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-ca-bundle\") pod \"d4ce0378-6a09-4538-a889-a9c466eebe4f\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " Sep 10 23:49:14.010362 kubelet[2760]: I0910 23:49:14.010064 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-backend-key-pair\") pod \"d4ce0378-6a09-4538-a889-a9c466eebe4f\" (UID: \"d4ce0378-6a09-4538-a889-a9c466eebe4f\") " Sep 10 23:49:14.013922 kubelet[2760]: I0910 23:49:14.013876 2760 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"d4ce0378-6a09-4538-a889-a9c466eebe4f" (UID: "d4ce0378-6a09-4538-a889-a9c466eebe4f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:49:14.016161 kubelet[2760]: I0910 23:49:14.016056 2760 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d4ce0378-6a09-4538-a889-a9c466eebe4f" (UID: "d4ce0378-6a09-4538-a889-a9c466eebe4f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:49:14.016868 kubelet[2760]: I0910 23:49:14.016785 2760 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ce0378-6a09-4538-a889-a9c466eebe4f-kube-api-access-99rdf" (OuterVolumeSpecName: "kube-api-access-99rdf") pod "d4ce0378-6a09-4538-a889-a9c466eebe4f" (UID: "d4ce0378-6a09-4538-a889-a9c466eebe4f"). InnerVolumeSpecName "kube-api-access-99rdf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:49:14.110778 kubelet[2760]: I0910 23:49:14.110619 2760 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-ca-bundle\") on node \"ci-4372-1-0-n-474f3036a8\" DevicePath \"\"" Sep 10 23:49:14.110778 kubelet[2760]: I0910 23:49:14.110673 2760 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4ce0378-6a09-4538-a889-a9c466eebe4f-whisker-backend-key-pair\") on node \"ci-4372-1-0-n-474f3036a8\" DevicePath \"\"" Sep 10 23:49:14.110778 kubelet[2760]: I0910 23:49:14.110685 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99rdf\" (UniqueName: \"kubernetes.io/projected/d4ce0378-6a09-4538-a889-a9c466eebe4f-kube-api-access-99rdf\") on node \"ci-4372-1-0-n-474f3036a8\" DevicePath \"\"" Sep 10 23:49:14.398604 systemd[1]: var-lib-kubelet-pods-d4ce0378\x2d6a09\x2d4538\x2da889\x2da9c466eebe4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99rdf.mount: Deactivated successfully. Sep 10 23:49:14.398874 systemd[1]: var-lib-kubelet-pods-d4ce0378\x2d6a09\x2d4538\x2da889\x2da9c466eebe4f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:49:14.824469 kubelet[2760]: I0910 23:49:14.824411 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:49:14.833969 systemd[1]: Removed slice kubepods-besteffort-podd4ce0378_6a09_4538_a889_a9c466eebe4f.slice - libcontainer container kubepods-besteffort-podd4ce0378_6a09_4538_a889_a9c466eebe4f.slice. Sep 10 23:49:14.918726 systemd[1]: Created slice kubepods-besteffort-pod528201bb_db46_4fe1_82c7_8362be2321ae.slice - libcontainer container kubepods-besteffort-pod528201bb_db46_4fe1_82c7_8362be2321ae.slice. 
Sep 10 23:49:15.018075 kubelet[2760]: I0910 23:49:15.017919 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/528201bb-db46-4fe1-82c7-8362be2321ae-whisker-backend-key-pair\") pod \"whisker-557f7db7d4-mbxbt\" (UID: \"528201bb-db46-4fe1-82c7-8362be2321ae\") " pod="calico-system/whisker-557f7db7d4-mbxbt" Sep 10 23:49:15.018075 kubelet[2760]: I0910 23:49:15.017987 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528201bb-db46-4fe1-82c7-8362be2321ae-whisker-ca-bundle\") pod \"whisker-557f7db7d4-mbxbt\" (UID: \"528201bb-db46-4fe1-82c7-8362be2321ae\") " pod="calico-system/whisker-557f7db7d4-mbxbt" Sep 10 23:49:15.018075 kubelet[2760]: I0910 23:49:15.018043 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lv6\" (UniqueName: \"kubernetes.io/projected/528201bb-db46-4fe1-82c7-8362be2321ae-kube-api-access-h4lv6\") pod \"whisker-557f7db7d4-mbxbt\" (UID: \"528201bb-db46-4fe1-82c7-8362be2321ae\") " pod="calico-system/whisker-557f7db7d4-mbxbt" Sep 10 23:49:15.225747 containerd[1544]: time="2025-09-10T23:49:15.225306454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557f7db7d4-mbxbt,Uid:528201bb-db46-4fe1-82c7-8362be2321ae,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:15.470203 systemd-networkd[1426]: calie3fb9be5976: Link UP Sep 10 23:49:15.472786 systemd-networkd[1426]: calie3fb9be5976: Gained carrier Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.254 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.313 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0 whisker-557f7db7d4- calico-system 528201bb-db46-4fe1-82c7-8362be2321ae 913 0 2025-09-10 23:49:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:557f7db7d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 whisker-557f7db7d4-mbxbt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie3fb9be5976 [] [] }} ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.313 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.390 [INFO][3848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" HandleID="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Workload="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.390 [INFO][3848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" HandleID="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Workload="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000341ba0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"whisker-557f7db7d4-mbxbt", "timestamp":"2025-09-10 23:49:15.390443584 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.390 [INFO][3848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.390 [INFO][3848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.391 [INFO][3848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.404 [INFO][3848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.416 [INFO][3848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.427 [INFO][3848] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.430 [INFO][3848] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.433 [INFO][3848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.434 [INFO][3848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 
handle="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.437 [INFO][3848] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03 Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.445 [INFO][3848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.451 [INFO][3848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.129/26] block=192.168.92.128/26 handle="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.452 [INFO][3848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.129/26] handle="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.452 [INFO][3848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:49:15.497750 containerd[1544]: 2025-09-10 23:49:15.452 [INFO][3848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.129/26] IPv6=[] ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" HandleID="k8s-pod-network.ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Workload="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.457 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0", GenerateName:"whisker-557f7db7d4-", Namespace:"calico-system", SelfLink:"", UID:"528201bb-db46-4fe1-82c7-8362be2321ae", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"557f7db7d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"whisker-557f7db7d4-mbxbt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calie3fb9be5976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.457 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.129/32] ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.457 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3fb9be5976 ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.472 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.474 [INFO][3819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0", GenerateName:"whisker-557f7db7d4-", Namespace:"calico-system", SelfLink:"", 
UID:"528201bb-db46-4fe1-82c7-8362be2321ae", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"557f7db7d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03", Pod:"whisker-557f7db7d4-mbxbt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie3fb9be5976", MAC:"b6:c6:30:3a:81:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:15.499105 containerd[1544]: 2025-09-10 23:49:15.490 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" Namespace="calico-system" Pod="whisker-557f7db7d4-mbxbt" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-whisker--557f7db7d4--mbxbt-eth0" Sep 10 23:49:15.584345 containerd[1544]: time="2025-09-10T23:49:15.583814737Z" level=info msg="connecting to shim ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03" address="unix:///run/containerd/s/86293aa0a28af8f6a62fbc4b37c0b60b9e877c0919372bd4235a6ead1cd81423" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:15.605086 kubelet[2760]: I0910 23:49:15.605005 2760 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Sep 10 23:49:15.629852 systemd[1]: Started cri-containerd-ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03.scope - libcontainer container ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03. Sep 10 23:49:15.643037 kubelet[2760]: I0910 23:49:15.642989 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ce0378-6a09-4538-a889-a9c466eebe4f" path="/var/lib/kubelet/pods/d4ce0378-6a09-4538-a889-a9c466eebe4f/volumes" Sep 10 23:49:15.824716 containerd[1544]: time="2025-09-10T23:49:15.824593559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557f7db7d4-mbxbt,Uid:528201bb-db46-4fe1-82c7-8362be2321ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03\"" Sep 10 23:49:15.829138 containerd[1544]: time="2025-09-10T23:49:15.828881842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:49:16.203249 systemd-networkd[1426]: vxlan.calico: Link UP Sep 10 23:49:16.203257 systemd-networkd[1426]: vxlan.calico: Gained carrier Sep 10 23:49:16.708494 systemd-networkd[1426]: calie3fb9be5976: Gained IPv6LL Sep 10 23:49:17.335898 containerd[1544]: time="2025-09-10T23:49:17.335784176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.337187 containerd[1544]: time="2025-09-10T23:49:17.337124201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:49:17.339339 containerd[1544]: time="2025-09-10T23:49:17.338218141Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.340539 containerd[1544]: time="2025-09-10T23:49:17.340505464Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.341395 containerd[1544]: time="2025-09-10T23:49:17.341322039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.512397477s" Sep 10 23:49:17.341521 containerd[1544]: time="2025-09-10T23:49:17.341503323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:49:17.344897 containerd[1544]: time="2025-09-10T23:49:17.344864305Z" level=info msg="CreateContainer within sandbox \"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:49:17.356800 containerd[1544]: time="2025-09-10T23:49:17.356754807Z" level=info msg="Container d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:17.361260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594266234.mount: Deactivated successfully. 
Sep 10 23:49:17.369504 containerd[1544]: time="2025-09-10T23:49:17.369426323Z" level=info msg="CreateContainer within sandbox \"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9\"" Sep 10 23:49:17.371684 containerd[1544]: time="2025-09-10T23:49:17.370531904Z" level=info msg="StartContainer for \"d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9\"" Sep 10 23:49:17.372072 containerd[1544]: time="2025-09-10T23:49:17.372046252Z" level=info msg="connecting to shim d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9" address="unix:///run/containerd/s/86293aa0a28af8f6a62fbc4b37c0b60b9e877c0919372bd4235a6ead1cd81423" protocol=ttrpc version=3 Sep 10 23:49:17.403992 systemd[1]: Started cri-containerd-d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9.scope - libcontainer container d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9. 
Sep 10 23:49:17.452167 containerd[1544]: time="2025-09-10T23:49:17.452093144Z" level=info msg="StartContainer for \"d2ea9294590bb4e461b25158d8690e13e3f5f06f3bee218f54783c2f49cd38c9\" returns successfully" Sep 10 23:49:17.456238 containerd[1544]: time="2025-09-10T23:49:17.456045018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:49:17.539915 systemd-networkd[1426]: vxlan.calico: Gained IPv6LL Sep 10 23:49:17.634540 containerd[1544]: time="2025-09-10T23:49:17.634416862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-98krg,Uid:bba1b489-acc9-4ee7-bd26-c1c2543cea09,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:49:17.795921 systemd-networkd[1426]: cali26f8bfff971: Link UP Sep 10 23:49:17.797182 systemd-networkd[1426]: cali26f8bfff971: Gained carrier Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.685 [INFO][4123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0 calico-apiserver-868f5f4df9- calico-apiserver bba1b489-acc9-4ee7-bd26-c1c2543cea09 843 0 2025-09-10 23:48:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868f5f4df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 calico-apiserver-868f5f4df9-98krg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali26f8bfff971 [] [] }} ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.686 [INFO][4123] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.730 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" HandleID="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.731 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" HandleID="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-n-474f3036a8", "pod":"calico-apiserver-868f5f4df9-98krg", "timestamp":"2025-09-10 23:49:17.730927221 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.731 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.731 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.731 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.743 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.753 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.760 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.764 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.767 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.767 [INFO][4135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.771 [INFO][4135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53 Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.777 [INFO][4135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.785 [INFO][4135] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.130/26] block=192.168.92.128/26 handle="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.785 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.130/26] handle="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.785 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:17.818754 containerd[1544]: 2025-09-10 23:49:17.786 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.130/26] IPv6=[] ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" HandleID="k8s-pod-network.3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.789 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0", GenerateName:"calico-apiserver-868f5f4df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bba1b489-acc9-4ee7-bd26-c1c2543cea09", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868f5f4df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"calico-apiserver-868f5f4df9-98krg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26f8bfff971", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.790 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.130/32] ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.790 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26f8bfff971 ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.797 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" 
Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.798 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0", GenerateName:"calico-apiserver-868f5f4df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bba1b489-acc9-4ee7-bd26-c1c2543cea09", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868f5f4df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53", Pod:"calico-apiserver-868f5f4df9-98krg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali26f8bfff971", MAC:"f6:e9:49:8b:e0:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:17.819282 containerd[1544]: 2025-09-10 23:49:17.813 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-98krg" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--98krg-eth0" Sep 10 23:49:17.867984 containerd[1544]: time="2025-09-10T23:49:17.867894614Z" level=info msg="connecting to shim 3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53" address="unix:///run/containerd/s/dfcec907f21ef2b8e4c166cbb9e5db6d6b168e9434b06d6e0fee9021361b22ee" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:17.898882 systemd[1]: Started cri-containerd-3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53.scope - libcontainer container 3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53. 
Sep 10 23:49:17.941853 containerd[1544]: time="2025-09-10T23:49:17.941633748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-98krg,Uid:bba1b489-acc9-4ee7-bd26-c1c2543cea09,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53\"" Sep 10 23:49:18.632183 containerd[1544]: time="2025-09-10T23:49:18.631879334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g2jf2,Uid:15bca87c-125d-4788-af8c-2c683525ab9f,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:18.632183 containerd[1544]: time="2025-09-10T23:49:18.632129938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwk7w,Uid:1f271b08-153c-491b-807a-29baad083a03,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:18.815071 systemd-networkd[1426]: caliaa3f7d145d0: Link UP Sep 10 23:49:18.815284 systemd-networkd[1426]: caliaa3f7d145d0: Gained carrier Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.704 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0 goldmane-54d579b49d- calico-system 15bca87c-125d-4788-af8c-2c683525ab9f 847 0 2025-09-10 23:48:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 goldmane-54d579b49d-g2jf2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaa3f7d145d0 [] [] }} ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.704 [INFO][4197] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.744 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" HandleID="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Workload="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.744 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" HandleID="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Workload="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"goldmane-54d579b49d-g2jf2", "timestamp":"2025-09-10 23:49:18.744107757 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.744 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.744 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.744 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.756 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.762 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.770 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.773 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.778 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.778 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.781 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2 Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.787 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4221] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.131/26] block=192.168.92.128/26 handle="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.131/26] handle="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:18.840258 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.131/26] IPv6=[] ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" HandleID="k8s-pod-network.f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Workload="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.806 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"15bca87c-125d-4788-af8c-2c683525ab9f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"goldmane-54d579b49d-g2jf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaa3f7d145d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.807 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.131/32] ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.807 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa3f7d145d0 ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.816 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.817 [INFO][4197] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"15bca87c-125d-4788-af8c-2c683525ab9f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2", Pod:"goldmane-54d579b49d-g2jf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaa3f7d145d0", MAC:"be:be:31:5c:24:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:18.842225 containerd[1544]: 2025-09-10 23:49:18.834 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" Namespace="calico-system" Pod="goldmane-54d579b49d-g2jf2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-goldmane--54d579b49d--g2jf2-eth0" Sep 10 23:49:18.883141 containerd[1544]: time="2025-09-10T23:49:18.882360819Z" level=info msg="connecting to shim f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2" address="unix:///run/containerd/s/1726f66c1481e76787cd7ec58b4f5d4dc199ba5ee1303311da85dae613482095" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:18.943673 systemd-networkd[1426]: cali43ca243dd8c: Link UP Sep 10 23:49:18.943868 systemd-networkd[1426]: cali43ca243dd8c: Gained carrier Sep 10 23:49:18.944842 systemd[1]: Started cri-containerd-f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2.scope - libcontainer container f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2. Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.714 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0 coredns-674b8bbfcf- kube-system 1f271b08-153c-491b-807a-29baad083a03 842 0 2025-09-10 23:48:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 coredns-674b8bbfcf-zwk7w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43ca243dd8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.715 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.759 [INFO][4226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" HandleID="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.760 [INFO][4226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" HandleID="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb690), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"coredns-674b8bbfcf-zwk7w", "timestamp":"2025-09-10 23:49:18.759897007 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.760 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.800 [INFO][4226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.857 [INFO][4226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.876 [INFO][4226] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.889 [INFO][4226] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.894 [INFO][4226] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.899 [INFO][4226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.900 [INFO][4226] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.905 [INFO][4226] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.916 [INFO][4226] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.932 [INFO][4226] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.132/26] block=192.168.92.128/26 handle="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.932 [INFO][4226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.132/26] handle="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.932 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:18.973444 containerd[1544]: 2025-09-10 23:49:18.932 [INFO][4226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.132/26] IPv6=[] ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" HandleID="k8s-pod-network.b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.974779 containerd[1544]: 2025-09-10 23:49:18.938 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f271b08-153c-491b-807a-29baad083a03", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"coredns-674b8bbfcf-zwk7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43ca243dd8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:18.974779 containerd[1544]: 2025-09-10 23:49:18.939 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.132/32] ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.974779 containerd[1544]: 2025-09-10 23:49:18.940 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43ca243dd8c ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.974779 containerd[1544]: 2025-09-10 23:49:18.942 [INFO][4203] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:18.974779 containerd[1544]: 2025-09-10 23:49:18.945 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f271b08-153c-491b-807a-29baad083a03", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d", Pod:"coredns-674b8bbfcf-zwk7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43ca243dd8c", 
MAC:"1e:b4:15:b7:23:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:18.975303 containerd[1544]: 2025-09-10 23:49:18.962 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" Namespace="kube-system" Pod="coredns-674b8bbfcf-zwk7w" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--zwk7w-eth0" Sep 10 23:49:19.015560 containerd[1544]: time="2025-09-10T23:49:19.015500823Z" level=info msg="connecting to shim b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d" address="unix:///run/containerd/s/ffd91eee1ad62b9d56f2bb969651b86283615795edb92febd07653e771a969db" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:19.056212 containerd[1544]: time="2025-09-10T23:49:19.056169841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g2jf2,Uid:15bca87c-125d-4788-af8c-2c683525ab9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2\"" Sep 10 23:49:19.057016 systemd[1]: Started cri-containerd-b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d.scope - libcontainer container b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d. 
Sep 10 23:49:19.115569 containerd[1544]: time="2025-09-10T23:49:19.115513718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zwk7w,Uid:1f271b08-153c-491b-807a-29baad083a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d\"" Sep 10 23:49:19.121533 containerd[1544]: time="2025-09-10T23:49:19.121417745Z" level=info msg="CreateContainer within sandbox \"b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:49:19.140884 containerd[1544]: time="2025-09-10T23:49:19.140771376Z" level=info msg="Container eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:19.152828 containerd[1544]: time="2025-09-10T23:49:19.152775074Z" level=info msg="CreateContainer within sandbox \"b776bbb75c7e5f0dca425349d1423fe6413f98ca8f9706dfc1f6608a6ba75b1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414\"" Sep 10 23:49:19.154786 containerd[1544]: time="2025-09-10T23:49:19.154741150Z" level=info msg="StartContainer for \"eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414\"" Sep 10 23:49:19.157519 containerd[1544]: time="2025-09-10T23:49:19.156995390Z" level=info msg="connecting to shim eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414" address="unix:///run/containerd/s/ffd91eee1ad62b9d56f2bb969651b86283615795edb92febd07653e771a969db" protocol=ttrpc version=3 Sep 10 23:49:19.190955 systemd[1]: Started cri-containerd-eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414.scope - libcontainer container eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414. 
Sep 10 23:49:19.246349 containerd[1544]: time="2025-09-10T23:49:19.246306291Z" level=info msg="StartContainer for \"eb0c7959548b6baf74a53cf8c4f0406eee14b28b72eb89d8678de4e6547fe414\" returns successfully" Sep 10 23:49:19.636063 containerd[1544]: time="2025-09-10T23:49:19.635997923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bb59679-rdc8k,Uid:85486300-8241-4b09-b357-178b6193527e,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:19.780493 systemd-networkd[1426]: cali26f8bfff971: Gained IPv6LL Sep 10 23:49:19.864151 systemd-networkd[1426]: cali0edd4a98f2d: Link UP Sep 10 23:49:19.865743 systemd-networkd[1426]: cali0edd4a98f2d: Gained carrier Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.723 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0 calico-kube-controllers-596bb59679- calico-system 85486300-8241-4b09-b357-178b6193527e 846 0 2025-09-10 23:48:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596bb59679 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 calico-kube-controllers-596bb59679-rdc8k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0edd4a98f2d [] [] }} ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.723 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.770 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" HandleID="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.771 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" HandleID="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"calico-kube-controllers-596bb59679-rdc8k", "timestamp":"2025-09-10 23:49:19.770655446 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.771 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.771 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.771 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.793 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.801 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.809 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.814 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.818 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.818 [INFO][4396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.822 [INFO][4396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.837 [INFO][4396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.849 [INFO][4396] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.133/26] block=192.168.92.128/26 handle="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.849 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.133/26] handle="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.849 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:19.914523 containerd[1544]: 2025-09-10 23:49:19.849 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.133/26] IPv6=[] ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" HandleID="k8s-pod-network.0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.857 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0", GenerateName:"calico-kube-controllers-596bb59679-", Namespace:"calico-system", SelfLink:"", UID:"85486300-8241-4b09-b357-178b6193527e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bb59679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"calico-kube-controllers-596bb59679-rdc8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0edd4a98f2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.857 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.133/32] ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.857 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0edd4a98f2d ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.867 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.871 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0", GenerateName:"calico-kube-controllers-596bb59679-", Namespace:"calico-system", SelfLink:"", UID:"85486300-8241-4b09-b357-178b6193527e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bb59679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f", Pod:"calico-kube-controllers-596bb59679-rdc8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0edd4a98f2d", MAC:"b2:9d:82:ff:a9:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:19.915054 containerd[1544]: 2025-09-10 23:49:19.904 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" Namespace="calico-system" Pod="calico-kube-controllers-596bb59679-rdc8k" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--kube--controllers--596bb59679--rdc8k-eth0" Sep 10 23:49:19.964663 kubelet[2760]: I0910 23:49:19.963523 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zwk7w" podStartSLOduration=38.963497386 podStartE2EDuration="38.963497386s" podCreationTimestamp="2025-09-10 23:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:19.912862867 +0000 UTC m=+44.427023900" watchObservedRunningTime="2025-09-10 23:49:19.963497386 +0000 UTC m=+44.477658419" Sep 10 23:49:19.973664 containerd[1544]: time="2025-09-10T23:49:19.973419646Z" level=info msg="connecting to shim 0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f" address="unix:///run/containerd/s/2d5770aa679457c210ed3124f39851011590f832858e4886a088aa761565405f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:20.036039 systemd[1]: Started cri-containerd-0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f.scope - libcontainer container 0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f. 
Sep 10 23:49:20.120493 containerd[1544]: time="2025-09-10T23:49:20.120352565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bb59679-rdc8k,Uid:85486300-8241-4b09-b357-178b6193527e,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f\"" Sep 10 23:49:20.180468 containerd[1544]: time="2025-09-10T23:49:20.179726029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:20.181837 containerd[1544]: time="2025-09-10T23:49:20.181806627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:49:20.182622 containerd[1544]: time="2025-09-10T23:49:20.182596481Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:20.186212 containerd[1544]: time="2025-09-10T23:49:20.186153105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:20.187204 containerd[1544]: time="2025-09-10T23:49:20.186856637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.730660897s" Sep 10 23:49:20.187633 containerd[1544]: time="2025-09-10T23:49:20.187608531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:49:20.191682 containerd[1544]: time="2025-09-10T23:49:20.190134376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:49:20.193914 containerd[1544]: time="2025-09-10T23:49:20.193865963Z" level=info msg="CreateContainer within sandbox \"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:49:20.201489 containerd[1544]: time="2025-09-10T23:49:20.201370297Z" level=info msg="Container ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:20.220900 containerd[1544]: time="2025-09-10T23:49:20.220835526Z" level=info msg="CreateContainer within sandbox \"ee0f0e61d92c3ad94608a571a705f3f86edcea798898a3ef10869c49f42c1a03\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6\"" Sep 10 23:49:20.222303 containerd[1544]: time="2025-09-10T23:49:20.221851864Z" level=info msg="StartContainer for \"ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6\"" Sep 10 23:49:20.228179 systemd-networkd[1426]: cali43ca243dd8c: Gained IPv6LL Sep 10 23:49:20.252724 containerd[1544]: time="2025-09-10T23:49:20.252589415Z" level=info msg="connecting to shim ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6" address="unix:///run/containerd/s/86293aa0a28af8f6a62fbc4b37c0b60b9e877c0919372bd4235a6ead1cd81423" protocol=ttrpc version=3 Sep 10 23:49:20.282940 systemd[1]: Started cri-containerd-ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6.scope - libcontainer container ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6. Sep 10 23:49:20.360532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037379432.mount: Deactivated successfully. 
Sep 10 23:49:20.362849 containerd[1544]: time="2025-09-10T23:49:20.362803831Z" level=info msg="StartContainer for \"ec8547cb88a464fbcdbc9fd211cf58804c3d478c5d1d3c74c4da421bc045ffc6\" returns successfully" Sep 10 23:49:20.419912 systemd-networkd[1426]: caliaa3f7d145d0: Gained IPv6LL Sep 10 23:49:20.632521 containerd[1544]: time="2025-09-10T23:49:20.632377942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vgs4,Uid:3865e030-478e-4bf4-875f-3e63ea712016,Namespace:calico-system,Attempt:0,}" Sep 10 23:49:20.632521 containerd[1544]: time="2025-09-10T23:49:20.632378422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgv2,Uid:e51822d2-3192-476b-8d88-be0574f27d69,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:20.840954 systemd-networkd[1426]: calid19fa490474: Link UP Sep 10 23:49:20.844784 systemd-networkd[1426]: calid19fa490474: Gained carrier Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.721 [INFO][4494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0 coredns-674b8bbfcf- kube-system e51822d2-3192-476b-8d88-be0574f27d69 845 0 2025-09-10 23:48:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 coredns-674b8bbfcf-spgv2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid19fa490474 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.722 [INFO][4494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.767 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" HandleID="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.768 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" HandleID="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"coredns-674b8bbfcf-spgv2", "timestamp":"2025-09-10 23:49:20.767893371 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.768 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.768 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.768 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.785 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.793 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.799 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.802 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.806 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.806 [INFO][4519] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.808 [INFO][4519] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.816 [INFO][4519] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.824 [INFO][4519] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.134/26] block=192.168.92.128/26 handle="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.825 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.134/26] handle="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.825 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:20.865697 containerd[1544]: 2025-09-10 23:49:20.825 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.134/26] IPv6=[] ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" HandleID="k8s-pod-network.d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Workload="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.866944 containerd[1544]: 2025-09-10 23:49:20.829 [INFO][4494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e51822d2-3192-476b-8d88-be0574f27d69", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"coredns-674b8bbfcf-spgv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid19fa490474", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:20.866944 containerd[1544]: 2025-09-10 23:49:20.830 [INFO][4494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.134/32] ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.866944 containerd[1544]: 2025-09-10 23:49:20.830 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid19fa490474 ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.866944 containerd[1544]: 2025-09-10 23:49:20.845 [INFO][4494] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.866944 containerd[1544]: 2025-09-10 23:49:20.848 [INFO][4494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e51822d2-3192-476b-8d88-be0574f27d69", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf", Pod:"coredns-674b8bbfcf-spgv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid19fa490474", 
MAC:"da:a4:35:04:44:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:20.867431 containerd[1544]: 2025-09-10 23:49:20.863 [INFO][4494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-spgv2" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-coredns--674b8bbfcf--spgv2-eth0" Sep 10 23:49:20.903413 containerd[1544]: time="2025-09-10T23:49:20.903219117Z" level=info msg="connecting to shim d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf" address="unix:///run/containerd/s/11205565f9bb53803eba00435a14e48d32188011d173bafbf9d9ee6de87100e7" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:20.940732 kubelet[2760]: I0910 23:49:20.929496 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-557f7db7d4-mbxbt" podStartSLOduration=2.567876964 podStartE2EDuration="6.929477787s" podCreationTimestamp="2025-09-10 23:49:14 +0000 UTC" firstStartedPulling="2025-09-10 23:49:15.827355452 +0000 UTC m=+40.341516485" lastFinishedPulling="2025-09-10 23:49:20.188956195 +0000 UTC m=+44.703117308" observedRunningTime="2025-09-10 23:49:20.927477512 +0000 UTC m=+45.441638545" watchObservedRunningTime="2025-09-10 23:49:20.929477787 +0000 UTC m=+45.443638820" Sep 10 23:49:20.974530 systemd[1]: Started cri-containerd-d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf.scope - libcontainer 
container d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf. Sep 10 23:49:21.014175 systemd-networkd[1426]: cali1221ddb261f: Link UP Sep 10 23:49:21.018048 systemd-networkd[1426]: cali1221ddb261f: Gained carrier Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.730 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0 csi-node-driver- calico-system 3865e030-478e-4bf4-875f-3e63ea712016 715 0 2025-09-10 23:48:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 csi-node-driver-4vgs4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1221ddb261f [] [] }} ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.731 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.787 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" HandleID="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Workload="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 
23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.787 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" HandleID="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Workload="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-n-474f3036a8", "pod":"csi-node-driver-4vgs4", "timestamp":"2025-09-10 23:49:20.787019074 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.787 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.825 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.825 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.889 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.911 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.933 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.949 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.959 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.961 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.973 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:20.988 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:21.004 [INFO][4524] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.92.135/26] block=192.168.92.128/26 handle="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:21.004 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.135/26] handle="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:21.004 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:49:21.045705 containerd[1544]: 2025-09-10 23:49:21.004 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.135/26] IPv6=[] ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" HandleID="k8s-pod-network.9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Workload="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.046309 containerd[1544]: 2025-09-10 23:49:21.009 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3865e030-478e-4bf4-875f-3e63ea712016", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"csi-node-driver-4vgs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1221ddb261f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:21.046309 containerd[1544]: 2025-09-10 23:49:21.009 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.135/32] ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.046309 containerd[1544]: 2025-09-10 23:49:21.009 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1221ddb261f ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.046309 containerd[1544]: 2025-09-10 23:49:21.012 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.046309 
containerd[1544]: 2025-09-10 23:49:21.017 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3865e030-478e-4bf4-875f-3e63ea712016", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa", Pod:"csi-node-driver-4vgs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1221ddb261f", MAC:"86:bc:a6:05:8b:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:21.046309 
containerd[1544]: 2025-09-10 23:49:21.040 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" Namespace="calico-system" Pod="csi-node-driver-4vgs4" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-csi--node--driver--4vgs4-eth0" Sep 10 23:49:21.068834 containerd[1544]: time="2025-09-10T23:49:21.068722629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spgv2,Uid:e51822d2-3192-476b-8d88-be0574f27d69,Namespace:kube-system,Attempt:0,} returns sandbox id \"d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf\"" Sep 10 23:49:21.076900 containerd[1544]: time="2025-09-10T23:49:21.076793252Z" level=info msg="CreateContainer within sandbox \"d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:49:21.080583 containerd[1544]: time="2025-09-10T23:49:21.080459877Z" level=info msg="connecting to shim 9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa" address="unix:///run/containerd/s/38db1b84cc7895d6b37279bbd87c3a51766ca42068ae76957ed443035c65df1d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:21.089449 containerd[1544]: time="2025-09-10T23:49:21.088979908Z" level=info msg="Container 2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:21.098358 containerd[1544]: time="2025-09-10T23:49:21.098232552Z" level=info msg="CreateContainer within sandbox \"d464e4e35064f852a6dad4ba0a16820a195ed8e9ffe8999d4d148a033ed380cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2\"" Sep 10 23:49:21.099632 containerd[1544]: time="2025-09-10T23:49:21.099353491Z" level=info msg="StartContainer for \"2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2\"" Sep 10 23:49:21.103141 
containerd[1544]: time="2025-09-10T23:49:21.102983676Z" level=info msg="connecting to shim 2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2" address="unix:///run/containerd/s/11205565f9bb53803eba00435a14e48d32188011d173bafbf9d9ee6de87100e7" protocol=ttrpc version=3 Sep 10 23:49:21.122816 systemd[1]: Started cri-containerd-9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa.scope - libcontainer container 9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa. Sep 10 23:49:21.130965 systemd[1]: Started cri-containerd-2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2.scope - libcontainer container 2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2. Sep 10 23:49:21.174854 containerd[1544]: time="2025-09-10T23:49:21.174743307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vgs4,Uid:3865e030-478e-4bf4-875f-3e63ea712016,Namespace:calico-system,Attempt:0,} returns sandbox id \"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa\"" Sep 10 23:49:21.189007 containerd[1544]: time="2025-09-10T23:49:21.188916438Z" level=info msg="StartContainer for \"2f6d69838364b919f254f1ab1c60e83c7aa831b265b33dce6b27a5181516e8c2\" returns successfully" Sep 10 23:49:21.634013 containerd[1544]: time="2025-09-10T23:49:21.633870520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-gk4bh,Uid:195bfbbb-cd1a-4b25-aa6f-d7e8a3139277,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:49:21.699811 systemd-networkd[1426]: cali0edd4a98f2d: Gained IPv6LL Sep 10 23:49:21.797346 systemd-networkd[1426]: cali7faac49c994: Link UP Sep 10 23:49:21.799001 systemd-networkd[1426]: cali7faac49c994: Gained carrier Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.682 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0 
calico-apiserver-868f5f4df9- calico-apiserver 195bfbbb-cd1a-4b25-aa6f-d7e8a3139277 848 0 2025-09-10 23:48:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868f5f4df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-n-474f3036a8 calico-apiserver-868f5f4df9-gk4bh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7faac49c994 [] [] }} ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.682 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.726 [INFO][4696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" HandleID="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.726 [INFO][4696] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" HandleID="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-n-474f3036a8", "pod":"calico-apiserver-868f5f4df9-gk4bh", "timestamp":"2025-09-10 23:49:21.726799886 +0000 UTC"}, Hostname:"ci-4372-1-0-n-474f3036a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.727 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.727 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.727 [INFO][4696] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-n-474f3036a8' Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.742 [INFO][4696] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.750 [INFO][4696] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.761 [INFO][4696] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.764 [INFO][4696] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.769 [INFO][4696] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.769 [INFO][4696] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.92.128/26 handle="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.771 [INFO][4696] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321 Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.777 [INFO][4696] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.786 [INFO][4696] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.92.136/26] block=192.168.92.128/26 handle="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.786 [INFO][4696] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.136/26] handle="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" host="ci-4372-1-0-n-474f3036a8" Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.786 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:49:21.819818 containerd[1544]: 2025-09-10 23:49:21.786 [INFO][4696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.136/26] IPv6=[] ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" HandleID="k8s-pod-network.5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Workload="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.789 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0", GenerateName:"calico-apiserver-868f5f4df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"195bfbbb-cd1a-4b25-aa6f-d7e8a3139277", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868f5f4df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"", Pod:"calico-apiserver-868f5f4df9-gk4bh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.92.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faac49c994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.790 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.136/32] ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.790 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7faac49c994 ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.799 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.800 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0", GenerateName:"calico-apiserver-868f5f4df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"195bfbbb-cd1a-4b25-aa6f-d7e8a3139277", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 48, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868f5f4df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-n-474f3036a8", ContainerID:"5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321", Pod:"calico-apiserver-868f5f4df9-gk4bh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7faac49c994", MAC:"9a:69:22:ef:ae:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:49:21.820426 containerd[1544]: 2025-09-10 23:49:21.813 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" Namespace="calico-apiserver" Pod="calico-apiserver-868f5f4df9-gk4bh" WorkloadEndpoint="ci--4372--1--0--n--474f3036a8-k8s-calico--apiserver--868f5f4df9--gk4bh-eth0" Sep 10 23:49:21.859227 containerd[1544]: time="2025-09-10T23:49:21.859147430Z" 
level=info msg="connecting to shim 5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321" address="unix:///run/containerd/s/503f204d144922269e16e18238fd241a8ebbe2ac44b6182d5104a1bc74bf5af3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:21.889012 systemd[1]: Started cri-containerd-5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321.scope - libcontainer container 5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321. Sep 10 23:49:21.968849 containerd[1544]: time="2025-09-10T23:49:21.968793413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868f5f4df9-gk4bh,Uid:195bfbbb-cd1a-4b25-aa6f-d7e8a3139277,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321\"" Sep 10 23:49:21.975444 kubelet[2760]: I0910 23:49:21.975367 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-spgv2" podStartSLOduration=40.975349529 podStartE2EDuration="40.975349529s" podCreationTimestamp="2025-09-10 23:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:21.949359108 +0000 UTC m=+46.463520141" watchObservedRunningTime="2025-09-10 23:49:21.975349529 +0000 UTC m=+46.489510562" Sep 10 23:49:22.340441 systemd-networkd[1426]: calid19fa490474: Gained IPv6LL Sep 10 23:49:22.916667 systemd-networkd[1426]: cali7faac49c994: Gained IPv6LL Sep 10 23:49:22.979793 systemd-networkd[1426]: cali1221ddb261f: Gained IPv6LL Sep 10 23:49:23.289549 containerd[1544]: time="2025-09-10T23:49:23.289388019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.291378 containerd[1544]: time="2025-09-10T23:49:23.290706122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, 
bytes read=44530807" Sep 10 23:49:23.291838 containerd[1544]: time="2025-09-10T23:49:23.291808741Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.294468 containerd[1544]: time="2025-09-10T23:49:23.294428907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.295382 containerd[1544]: time="2025-09-10T23:49:23.295272521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.105084544s" Sep 10 23:49:23.295382 containerd[1544]: time="2025-09-10T23:49:23.295304482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:49:23.296768 containerd[1544]: time="2025-09-10T23:49:23.296742747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:49:23.301595 containerd[1544]: time="2025-09-10T23:49:23.301545110Z" level=info msg="CreateContainer within sandbox \"3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:49:23.313517 containerd[1544]: time="2025-09-10T23:49:23.313466117Z" level=info msg="Container d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:23.328300 containerd[1544]: time="2025-09-10T23:49:23.328247173Z" level=info 
msg="CreateContainer within sandbox \"3b6d8f1337960870fbf3407cd41dcd4c52ff1df8f584a4220c7edcba5d563f53\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5\"" Sep 10 23:49:23.329716 containerd[1544]: time="2025-09-10T23:49:23.329385792Z" level=info msg="StartContainer for \"d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5\"" Sep 10 23:49:23.332824 containerd[1544]: time="2025-09-10T23:49:23.332689450Z" level=info msg="connecting to shim d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5" address="unix:///run/containerd/s/dfcec907f21ef2b8e4c166cbb9e5db6d6b168e9434b06d6e0fee9021361b22ee" protocol=ttrpc version=3 Sep 10 23:49:23.366828 systemd[1]: Started cri-containerd-d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5.scope - libcontainer container d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5. Sep 10 23:49:23.438089 containerd[1544]: time="2025-09-10T23:49:23.438037636Z" level=info msg="StartContainer for \"d1ee3155065e7709fceb731edc211feeaa99ff6eb25611ecb98d7a70169165a5\" returns successfully" Sep 10 23:49:24.688082 kubelet[2760]: I0910 23:49:24.688016 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868f5f4df9-98krg" podStartSLOduration=26.335926881 podStartE2EDuration="31.687897502s" podCreationTimestamp="2025-09-10 23:48:53 +0000 UTC" firstStartedPulling="2025-09-10 23:49:17.944253637 +0000 UTC m=+42.458414670" lastFinishedPulling="2025-09-10 23:49:23.296224258 +0000 UTC m=+47.810385291" observedRunningTime="2025-09-10 23:49:23.959003946 +0000 UTC m=+48.473164979" watchObservedRunningTime="2025-09-10 23:49:24.687897502 +0000 UTC m=+49.202058535" Sep 10 23:49:25.235907 kubelet[2760]: I0910 23:49:25.235469 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:49:25.410167 containerd[1544]: 
time="2025-09-10T23:49:25.410116909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"0eb40d43e7b77fda3b55e79e27b6c15ae1a65794579274cf8fc0fd074859ea89\" pid:4822 exited_at:{seconds:1757548165 nanos:409725503}"
Sep 10 23:49:25.558965 containerd[1544]: time="2025-09-10T23:49:25.558823917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"ecbd6aba2681c6220423bab72efb130d8be4ce67a6fd7699b9aeba4a755e9096\" pid:4854 exited_at:{seconds:1757548165 nanos:558212667}"
Sep 10 23:49:26.008860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447114905.mount: Deactivated successfully.
Sep 10 23:49:26.536679 containerd[1544]: time="2025-09-10T23:49:26.536276852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:26.538057 containerd[1544]: time="2025-09-10T23:49:26.538023802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 23:49:26.538867 containerd[1544]: time="2025-09-10T23:49:26.538805375Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:26.542091 containerd[1544]: time="2025-09-10T23:49:26.542057109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:26.542772 containerd[1544]: time="2025-09-10T23:49:26.542591918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.245714369s"
Sep 10 23:49:26.542772 containerd[1544]: time="2025-09-10T23:49:26.542684760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 23:49:26.544194 containerd[1544]: time="2025-09-10T23:49:26.544145865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 10 23:49:26.548164 containerd[1544]: time="2025-09-10T23:49:26.548082691Z" level=info msg="CreateContainer within sandbox \"f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 23:49:26.560664 containerd[1544]: time="2025-09-10T23:49:26.558352384Z" level=info msg="Container a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:26.572782 containerd[1544]: time="2025-09-10T23:49:26.572738506Z" level=info msg="CreateContainer within sandbox \"f41c5816509d7c8b7cb0ae0f8638cc2016ac807e3694b1b7f065202d011e31a2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\""
Sep 10 23:49:26.573866 containerd[1544]: time="2025-09-10T23:49:26.573792044Z" level=info msg="StartContainer for \"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\""
Sep 10 23:49:26.576549 containerd[1544]: time="2025-09-10T23:49:26.576513930Z" level=info msg="connecting to shim a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28" address="unix:///run/containerd/s/1726f66c1481e76787cd7ec58b4f5d4dc199ba5ee1303311da85dae613482095" protocol=ttrpc version=3
Sep 10 23:49:26.609886 systemd[1]: Started cri-containerd-a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28.scope - libcontainer container a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28.
Sep 10 23:49:26.657793 containerd[1544]: time="2025-09-10T23:49:26.657713378Z" level=info msg="StartContainer for \"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" returns successfully"
Sep 10 23:49:26.974934 kubelet[2760]: I0910 23:49:26.974751 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-g2jf2" podStartSLOduration=22.488278484 podStartE2EDuration="29.974149549s" podCreationTimestamp="2025-09-10 23:48:57 +0000 UTC" firstStartedPulling="2025-09-10 23:49:19.058138757 +0000 UTC m=+43.572299790" lastFinishedPulling="2025-09-10 23:49:26.544009822 +0000 UTC m=+51.058170855" observedRunningTime="2025-09-10 23:49:26.970696331 +0000 UTC m=+51.484857404" watchObservedRunningTime="2025-09-10 23:49:26.974149549 +0000 UTC m=+51.488310582"
Sep 10 23:49:27.076503 containerd[1544]: time="2025-09-10T23:49:27.076256219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"13deef4f91cdc394fccb8bc2761b7190698680734366220942aa6f56e9ce2246\" pid:4935 exit_status:1 exited_at:{seconds:1757548167 nanos:67907879}"
Sep 10 23:49:28.054907 containerd[1544]: time="2025-09-10T23:49:28.054825839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"815eb0cfd3b4b5c7e65b1194c1a0c54af1391950cd7014e6666d680658f57eb3\" pid:4959 exit_status:1 exited_at:{seconds:1757548168 nanos:54412952}"
Sep 10 23:49:31.210698 containerd[1544]: time="2025-09-10T23:49:31.209951126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:31.212029 containerd[1544]: time="2025-09-10T23:49:31.211433339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 10 23:49:31.213828 containerd[1544]: time="2025-09-10T23:49:31.213756919Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:31.219007 containerd[1544]: time="2025-09-10T23:49:31.218737202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:31.219434 containerd[1544]: time="2025-09-10T23:49:31.219400088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.675228342s"
Sep 10 23:49:31.219434 containerd[1544]: time="2025-09-10T23:49:31.219432928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 10 23:49:31.221853 containerd[1544]: time="2025-09-10T23:49:31.221596466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 23:49:31.244526 containerd[1544]: time="2025-09-10T23:49:31.244481784Z" level=info msg="CreateContainer within sandbox \"0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 10 23:49:31.260309 containerd[1544]: time="2025-09-10T23:49:31.257866259Z" level=info msg="Container e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:31.267295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016480743.mount: Deactivated successfully.
Sep 10 23:49:31.277475 containerd[1544]: time="2025-09-10T23:49:31.277403267Z" level=info msg="CreateContainer within sandbox \"0b7a08778fb0a82a3ed5784ec3276f862b25d83f7b1651aae1beb9e119dfe38f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\""
Sep 10 23:49:31.278199 containerd[1544]: time="2025-09-10T23:49:31.278146674Z" level=info msg="StartContainer for \"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\""
Sep 10 23:49:31.281083 containerd[1544]: time="2025-09-10T23:49:31.281042139Z" level=info msg="connecting to shim e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31" address="unix:///run/containerd/s/2d5770aa679457c210ed3124f39851011590f832858e4886a088aa761565405f" protocol=ttrpc version=3
Sep 10 23:49:31.320049 systemd[1]: Started cri-containerd-e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31.scope - libcontainer container e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31.
Sep 10 23:49:31.397217 containerd[1544]: time="2025-09-10T23:49:31.397139899Z" level=info msg="StartContainer for \"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" returns successfully"
Sep 10 23:49:32.025201 kubelet[2760]: I0910 23:49:32.025055 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-596bb59679-rdc8k" podStartSLOduration=22.926936021 podStartE2EDuration="34.025038032s" podCreationTimestamp="2025-09-10 23:48:58 +0000 UTC" firstStartedPulling="2025-09-10 23:49:20.122774929 +0000 UTC m=+44.636936002" lastFinishedPulling="2025-09-10 23:49:31.22087698 +0000 UTC m=+55.735038013" observedRunningTime="2025-09-10 23:49:32.023383097 +0000 UTC m=+56.537544130" watchObservedRunningTime="2025-09-10 23:49:32.025038032 +0000 UTC m=+56.539199185"
Sep 10 23:49:32.046396 containerd[1544]: time="2025-09-10T23:49:32.046352139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"b80de4bf1b7895cab58a048c9954c5371454eeab3c183a3948b8912dfeab54a2\" pid:5038 exited_at:{seconds:1757548172 nanos:45057048}"
Sep 10 23:49:33.083281 containerd[1544]: time="2025-09-10T23:49:33.083108294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:33.085431 containerd[1544]: time="2025-09-10T23:49:33.085142272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 23:49:33.087663 containerd[1544]: time="2025-09-10T23:49:33.086430323Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:33.091256 containerd[1544]: time="2025-09-10T23:49:33.091213566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:33.092587 containerd[1544]: time="2025-09-10T23:49:33.092529578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.870545388s"
Sep 10 23:49:33.092587 containerd[1544]: time="2025-09-10T23:49:33.092574178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 23:49:33.095671 containerd[1544]: time="2025-09-10T23:49:33.094129112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 23:49:33.100062 containerd[1544]: time="2025-09-10T23:49:33.100027205Z" level=info msg="CreateContainer within sandbox \"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 23:49:33.118829 containerd[1544]: time="2025-09-10T23:49:33.118785493Z" level=info msg="Container 3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:33.130905 containerd[1544]: time="2025-09-10T23:49:33.130859360Z" level=info msg="CreateContainer within sandbox \"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e\""
Sep 10 23:49:33.131871 containerd[1544]: time="2025-09-10T23:49:33.131809329Z" level=info msg="StartContainer for \"3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e\""
Sep 10 23:49:33.136056 containerd[1544]: time="2025-09-10T23:49:33.136002646Z" level=info msg="connecting to shim 3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e" address="unix:///run/containerd/s/38db1b84cc7895d6b37279bbd87c3a51766ca42068ae76957ed443035c65df1d" protocol=ttrpc version=3
Sep 10 23:49:33.173165 systemd[1]: Started cri-containerd-3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e.scope - libcontainer container 3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e.
Sep 10 23:49:33.303073 containerd[1544]: time="2025-09-10T23:49:33.303011379Z" level=info msg="StartContainer for \"3ded54394b273cf2c575d15b7458cc4c63f067a8f652177cd722c22039c3b76e\" returns successfully"
Sep 10 23:49:33.495917 containerd[1544]: time="2025-09-10T23:49:33.495300538Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:33.495917 containerd[1544]: time="2025-09-10T23:49:33.495843103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 23:49:33.499892 containerd[1544]: time="2025-09-10T23:49:33.499834418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 405.666026ms"
Sep 10 23:49:33.499892 containerd[1544]: time="2025-09-10T23:49:33.499881459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 23:49:33.501650 containerd[1544]: time="2025-09-10T23:49:33.501439793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 23:49:33.505953 containerd[1544]: time="2025-09-10T23:49:33.505819392Z" level=info msg="CreateContainer within sandbox \"5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 23:49:33.526958 containerd[1544]: time="2025-09-10T23:49:33.522265779Z" level=info msg="Container a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:33.525247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375773231.mount: Deactivated successfully.
Sep 10 23:49:33.540152 containerd[1544]: time="2025-09-10T23:49:33.540094218Z" level=info msg="CreateContainer within sandbox \"5332434990c1f3f3f1cf942c5968ed50a49bdeb272f287c09044459e58cec321\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702\""
Sep 10 23:49:33.541724 containerd[1544]: time="2025-09-10T23:49:33.541675232Z" level=info msg="StartContainer for \"a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702\""
Sep 10 23:49:33.544685 containerd[1544]: time="2025-09-10T23:49:33.544285536Z" level=info msg="connecting to shim a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702" address="unix:///run/containerd/s/503f204d144922269e16e18238fd241a8ebbe2ac44b6182d5104a1bc74bf5af3" protocol=ttrpc version=3
Sep 10 23:49:33.569954 systemd[1]: Started cri-containerd-a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702.scope - libcontainer container a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702.
Sep 10 23:49:33.689885 containerd[1544]: time="2025-09-10T23:49:33.689841077Z" level=info msg="StartContainer for \"a5a45f4081557e3106597b049c3db0aec6042b4b37195f602aa3d9f3294df702\" returns successfully"
Sep 10 23:49:34.027308 kubelet[2760]: I0910 23:49:34.026788 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868f5f4df9-gk4bh" podStartSLOduration=29.496730793 podStartE2EDuration="41.026769772s" podCreationTimestamp="2025-09-10 23:48:53 +0000 UTC" firstStartedPulling="2025-09-10 23:49:21.970794248 +0000 UTC m=+46.484955241" lastFinishedPulling="2025-09-10 23:49:33.500833187 +0000 UTC m=+58.014994220" observedRunningTime="2025-09-10 23:49:34.02653397 +0000 UTC m=+58.540695003" watchObservedRunningTime="2025-09-10 23:49:34.026769772 +0000 UTC m=+58.540930765"
Sep 10 23:49:34.998665 kubelet[2760]: I0910 23:49:34.998529 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:49:35.228868 containerd[1544]: time="2025-09-10T23:49:35.228814056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:35.230738 containerd[1544]: time="2025-09-10T23:49:35.230377270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 23:49:35.231959 containerd[1544]: time="2025-09-10T23:49:35.231879844Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:35.234904 containerd[1544]: time="2025-09-10T23:49:35.234796231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:35.235628 containerd[1544]: time="2025-09-10T23:49:35.235597758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.734059684s"
Sep 10 23:49:35.235840 containerd[1544]: time="2025-09-10T23:49:35.235742320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 23:49:35.241982 containerd[1544]: time="2025-09-10T23:49:35.241940297Z" level=info msg="CreateContainer within sandbox \"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 23:49:35.259325 containerd[1544]: time="2025-09-10T23:49:35.257431720Z" level=info msg="Container f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:35.258937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428998341.mount: Deactivated successfully.
Sep 10 23:49:35.275763 containerd[1544]: time="2025-09-10T23:49:35.275715689Z" level=info msg="CreateContainer within sandbox \"9857786713b8b60a6645322e5dc91b63b20c022c52fa57d1448e4c365ed2ffaa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd\""
Sep 10 23:49:35.278935 containerd[1544]: time="2025-09-10T23:49:35.278694277Z" level=info msg="StartContainer for \"f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd\""
Sep 10 23:49:35.281732 containerd[1544]: time="2025-09-10T23:49:35.281625304Z" level=info msg="connecting to shim f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd" address="unix:///run/containerd/s/38db1b84cc7895d6b37279bbd87c3a51766ca42068ae76957ed443035c65df1d" protocol=ttrpc version=3
Sep 10 23:49:35.347714 systemd[1]: Started cri-containerd-f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd.scope - libcontainer container f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd.
Sep 10 23:49:35.422839 containerd[1544]: time="2025-09-10T23:49:35.422793568Z" level=info msg="StartContainer for \"f69202b5dcaf38604a4f4d00bdede7d1cc3017e7291e3f4a792702e097189bdd\" returns successfully"
Sep 10 23:49:35.781244 kubelet[2760]: I0910 23:49:35.781121 2760 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 23:49:35.789905 kubelet[2760]: I0910 23:49:35.789806 2760 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 23:49:36.032115 kubelet[2760]: I0910 23:49:36.031932 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4vgs4" podStartSLOduration=24.97470887 podStartE2EDuration="39.031916122s" podCreationTimestamp="2025-09-10 23:48:57 +0000 UTC" firstStartedPulling="2025-09-10 23:49:21.180171643 +0000 UTC m=+45.694332676" lastFinishedPulling="2025-09-10 23:49:35.237378935 +0000 UTC m=+59.751539928" observedRunningTime="2025-09-10 23:49:36.031303797 +0000 UTC m=+60.545464830" watchObservedRunningTime="2025-09-10 23:49:36.031916122 +0000 UTC m=+60.546077115"
Sep 10 23:49:55.559707 containerd[1544]: time="2025-09-10T23:49:55.559586687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"fd04be8e7891a81a6186eab904ead650cd505878f8dcf2b7ed41cd4ac3ae8cfa\" pid:5188 exited_at:{seconds:1757548195 nanos:559248123}"
Sep 10 23:49:58.097978 containerd[1544]: time="2025-09-10T23:49:58.097818559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"8f972b518b9de19b10e645333c23789a854f4c965839af19e4b42fe82eae9bb4\" pid:5218 exited_at:{seconds:1757548198 nanos:96484384}"
Sep 10 23:50:02.040162 containerd[1544]: time="2025-09-10T23:50:02.040116474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"7efcea5084a744f2fc4e2c399884cff27edcf5711377365bc639405065cb45df\" pid:5244 exited_at:{seconds:1757548202 nanos:39814590}"
Sep 10 23:50:02.248676 kubelet[2760]: I0910 23:50:02.248149 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:50:13.665787 containerd[1544]: time="2025-09-10T23:50:13.665735333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"95ceefae69a33d8827f74c095a2245d739ee5e46e49c50fce4baf439863fd8e1\" pid:5269 exited_at:{seconds:1757548213 nanos:665246207}"
Sep 10 23:50:25.499382 containerd[1544]: time="2025-09-10T23:50:25.499277417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"9f2680f684637149f902d1d5a63d4ca237c3ea61bd8f9b902984991ec07384f2\" pid:5292 exited_at:{seconds:1757548225 nanos:498820531}"
Sep 10 23:50:28.028805 containerd[1544]: time="2025-09-10T23:50:28.028759604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"7fae1d90a4f4e0cefb0f4977bbf52b386199d633c4fb4d6a3a48afcb539c204f\" pid:5316 exited_at:{seconds:1757548228 nanos:28423799}"
Sep 10 23:50:32.011781 containerd[1544]: time="2025-09-10T23:50:32.011587707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"7bf31607f9f487e1b8c93e6519530f40f34d63e51f60bbb59c9de13505cf9686\" pid:5338 exited_at:{seconds:1757548232 nanos:11354224}"
Sep 10 23:50:40.147059 containerd[1544]: time="2025-09-10T23:50:40.147014129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"730bfa4a01aa158c2e12452996dca4e771bd0ef4be70d834c2f62fb59c647abd\" pid:5368 exited_at:{seconds:1757548240 nanos:145800113}"
Sep 10 23:50:55.506878 containerd[1544]: time="2025-09-10T23:50:55.506829064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"842381ff8d475bf3de21780309600874fca309dc7df9d5d42eaf255d7d085acc\" pid:5416 exited_at:{seconds:1757548255 nanos:506266056}"
Sep 10 23:50:58.118837 containerd[1544]: time="2025-09-10T23:50:58.118783583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"80d8db96338e59936e7a657b2402d028787d97ee3112fceb695aa420f41c748e\" pid:5440 exited_at:{seconds:1757548258 nanos:118417498}"
Sep 10 23:51:02.014067 containerd[1544]: time="2025-09-10T23:51:02.013994748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"3f0fcca0715a08d6e61e0efd51f5595aaa0b6698d2b70edd75f137eed2806652\" pid:5463 exited_at:{seconds:1757548262 nanos:13535982}"
Sep 10 23:51:12.852915 systemd[1]: Started sshd@7-157.90.149.201:22-139.178.89.65:49690.service - OpenSSH per-connection server daemon (139.178.89.65:49690).
Sep 10 23:51:13.629875 containerd[1544]: time="2025-09-10T23:51:13.629814847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"2d6a1913e203d4b3f106250713d9da21abef0be6c455742cb771fe21a128b8f3\" pid:5493 exited_at:{seconds:1757548273 nanos:629177718}"
Sep 10 23:51:13.865855 sshd[5478]: Accepted publickey for core from 139.178.89.65 port 49690 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:13.868092 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:13.875141 systemd-logind[1466]: New session 8 of user core.
Sep 10 23:51:13.882016 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 10 23:51:14.683006 sshd[5504]: Connection closed by 139.178.89.65 port 49690
Sep 10 23:51:14.684672 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:14.691391 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit.
Sep 10 23:51:14.692113 systemd[1]: sshd@7-157.90.149.201:22-139.178.89.65:49690.service: Deactivated successfully.
Sep 10 23:51:14.696249 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 23:51:14.699809 systemd-logind[1466]: Removed session 8.
Sep 10 23:51:19.855948 systemd[1]: Started sshd@8-157.90.149.201:22-139.178.89.65:49698.service - OpenSSH per-connection server daemon (139.178.89.65:49698).
Sep 10 23:51:20.854270 sshd[5519]: Accepted publickey for core from 139.178.89.65 port 49698 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:20.856434 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:20.864584 systemd-logind[1466]: New session 9 of user core.
Sep 10 23:51:20.873012 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 23:51:21.617476 sshd[5521]: Connection closed by 139.178.89.65 port 49698
Sep 10 23:51:21.616429 sshd-session[5519]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:21.622476 systemd[1]: sshd@8-157.90.149.201:22-139.178.89.65:49698.service: Deactivated successfully.
Sep 10 23:51:21.628250 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 23:51:21.629578 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit.
Sep 10 23:51:21.633477 systemd-logind[1466]: Removed session 9.
Sep 10 23:51:25.516788 containerd[1544]: time="2025-09-10T23:51:25.516728798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"d151dd9a10f28dc5f74e908449c37a17f16d70664a4b7670ca7a71535ae9f2cf\" pid:5547 exited_at:{seconds:1757548285 nanos:516090629}"
Sep 10 23:51:26.792419 systemd[1]: Started sshd@9-157.90.149.201:22-139.178.89.65:33900.service - OpenSSH per-connection server daemon (139.178.89.65:33900).
Sep 10 23:51:27.820425 sshd[5559]: Accepted publickey for core from 139.178.89.65 port 33900 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:27.823051 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:27.828991 systemd-logind[1466]: New session 10 of user core.
Sep 10 23:51:27.834853 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:51:28.035187 containerd[1544]: time="2025-09-10T23:51:28.035138834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"bf11322ed01243bf95c4c6ce6372079f2539667d3df516281810d464fc4aca3a\" pid:5574 exited_at:{seconds:1757548288 nanos:34780029}"
Sep 10 23:51:28.583723 sshd[5561]: Connection closed by 139.178.89.65 port 33900
Sep 10 23:51:28.584830 sshd-session[5559]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:28.590096 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:51:28.590183 systemd[1]: sshd@9-157.90.149.201:22-139.178.89.65:33900.service: Deactivated successfully.
Sep 10 23:51:28.592162 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:51:28.597955 systemd-logind[1466]: Removed session 10.
Sep 10 23:51:28.756474 systemd[1]: Started sshd@10-157.90.149.201:22-139.178.89.65:33914.service - OpenSSH per-connection server daemon (139.178.89.65:33914).
Sep 10 23:51:29.752841 sshd[5595]: Accepted publickey for core from 139.178.89.65 port 33914 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:29.755376 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:29.762144 systemd-logind[1466]: New session 11 of user core.
Sep 10 23:51:29.772052 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:51:30.559942 sshd[5601]: Connection closed by 139.178.89.65 port 33914
Sep 10 23:51:30.560240 sshd-session[5595]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:30.566717 systemd[1]: sshd@10-157.90.149.201:22-139.178.89.65:33914.service: Deactivated successfully.
Sep 10 23:51:30.570319 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:51:30.573902 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:51:30.575085 systemd-logind[1466]: Removed session 11.
Sep 10 23:51:30.743818 systemd[1]: Started sshd@11-157.90.149.201:22-139.178.89.65:35690.service - OpenSSH per-connection server daemon (139.178.89.65:35690).
Sep 10 23:51:31.809660 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 35690 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:31.812215 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:31.819509 systemd-logind[1466]: New session 12 of user core.
Sep 10 23:51:31.823861 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:51:32.022588 containerd[1544]: time="2025-09-10T23:51:32.022354900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"5ce1adfb6f1d860b74cdd6375fe0dbf68885e7357b83b1b1e1746420d6ff9fcd\" pid:5626 exited_at:{seconds:1757548292 nanos:21187684}"
Sep 10 23:51:32.611334 sshd[5612]: Connection closed by 139.178.89.65 port 35690
Sep 10 23:51:32.612473 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:32.616970 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:51:32.617520 systemd[1]: sshd@11-157.90.149.201:22-139.178.89.65:35690.service: Deactivated successfully.
Sep 10 23:51:32.619580 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:51:32.622505 systemd-logind[1466]: Removed session 12.
Sep 10 23:51:37.783467 systemd[1]: Started sshd@12-157.90.149.201:22-139.178.89.65:35700.service - OpenSSH per-connection server daemon (139.178.89.65:35700).
Sep 10 23:51:38.783050 sshd[5648]: Accepted publickey for core from 139.178.89.65 port 35700 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:38.785987 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:38.791687 systemd-logind[1466]: New session 13 of user core.
Sep 10 23:51:38.801998 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:51:39.547689 sshd[5650]: Connection closed by 139.178.89.65 port 35700
Sep 10 23:51:39.546911 sshd-session[5648]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:39.553008 systemd[1]: sshd@12-157.90.149.201:22-139.178.89.65:35700.service: Deactivated successfully.
Sep 10 23:51:39.555472 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:51:39.558252 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:51:39.560743 systemd-logind[1466]: Removed session 13.
Sep 10 23:51:39.715778 systemd[1]: Started sshd@13-157.90.149.201:22-139.178.89.65:35708.service - OpenSSH per-connection server daemon (139.178.89.65:35708).
Sep 10 23:51:40.147415 containerd[1544]: time="2025-09-10T23:51:40.147157155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"cff9723b4ae7dbdfcb8b9d15ad8af8ee43bf17adf9491c1bf0fc60b55f51de9d\" pid:5677 exited_at:{seconds:1757548300 nanos:144796525}"
Sep 10 23:51:40.711723 sshd[5662]: Accepted publickey for core from 139.178.89.65 port 35708 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:40.713453 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:40.719181 systemd-logind[1466]: New session 14 of user core.
Sep 10 23:51:40.731314 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:51:41.611120 sshd[5686]: Connection closed by 139.178.89.65 port 35708
Sep 10 23:51:41.612097 sshd-session[5662]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:41.618291 systemd[1]: sshd@13-157.90.149.201:22-139.178.89.65:35708.service: Deactivated successfully.
Sep 10 23:51:41.620615 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:51:41.622123 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:51:41.624075 systemd-logind[1466]: Removed session 14.
Sep 10 23:51:41.791856 systemd[1]: Started sshd@14-157.90.149.201:22-139.178.89.65:56980.service - OpenSSH per-connection server daemon (139.178.89.65:56980).
Sep 10 23:51:42.804361 sshd[5695]: Accepted publickey for core from 139.178.89.65 port 56980 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:42.807403 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:42.815770 systemd-logind[1466]: New session 15 of user core.
Sep 10 23:51:42.822965 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:51:44.175673 sshd[5699]: Connection closed by 139.178.89.65 port 56980
Sep 10 23:51:44.177212 sshd-session[5695]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:44.182707 systemd[1]: sshd@14-157.90.149.201:22-139.178.89.65:56980.service: Deactivated successfully.
Sep 10 23:51:44.182780 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:51:44.186241 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:51:44.189302 systemd-logind[1466]: Removed session 15.
Sep 10 23:51:44.350799 systemd[1]: Started sshd@15-157.90.149.201:22-139.178.89.65:56996.service - OpenSSH per-connection server daemon (139.178.89.65:56996).
Sep 10 23:51:45.372873 sshd[5717]: Accepted publickey for core from 139.178.89.65 port 56996 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:45.375116 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:45.381191 systemd-logind[1466]: New session 16 of user core.
Sep 10 23:51:45.395897 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:51:46.336479 sshd[5719]: Connection closed by 139.178.89.65 port 56996
Sep 10 23:51:46.334060 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:46.341089 systemd[1]: sshd@15-157.90.149.201:22-139.178.89.65:56996.service: Deactivated successfully.
Sep 10 23:51:46.343632 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:51:46.346922 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:51:46.348337 systemd-logind[1466]: Removed session 16.
Sep 10 23:51:46.505906 systemd[1]: Started sshd@16-157.90.149.201:22-139.178.89.65:57012.service - OpenSSH per-connection server daemon (139.178.89.65:57012).
Sep 10 23:51:47.518176 sshd[5729]: Accepted publickey for core from 139.178.89.65 port 57012 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:47.520389 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:47.526344 systemd-logind[1466]: New session 17 of user core.
Sep 10 23:51:47.533977 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:51:48.294845 sshd[5731]: Connection closed by 139.178.89.65 port 57012
Sep 10 23:51:48.295342 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:48.305966 systemd[1]: sshd@16-157.90.149.201:22-139.178.89.65:57012.service: Deactivated successfully.
Sep 10 23:51:48.314300 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:51:48.316098 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:51:48.318264 systemd-logind[1466]: Removed session 17.
Sep 10 23:51:53.469333 systemd[1]: Started sshd@17-157.90.149.201:22-139.178.89.65:45586.service - OpenSSH per-connection server daemon (139.178.89.65:45586).
Sep 10 23:51:54.483592 sshd[5746]: Accepted publickey for core from 139.178.89.65 port 45586 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:51:54.485481 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:54.493279 systemd-logind[1466]: New session 18 of user core.
Sep 10 23:51:54.498844 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:51:55.244849 sshd[5748]: Connection closed by 139.178.89.65 port 45586
Sep 10 23:51:55.245686 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:55.251955 systemd[1]: sshd@17-157.90.149.201:22-139.178.89.65:45586.service: Deactivated successfully.
Sep 10 23:51:55.254190 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:51:55.255519 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:51:55.257567 systemd-logind[1466]: Removed session 18.
Sep 10 23:51:55.501997 containerd[1544]: time="2025-09-10T23:51:55.501396740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"216c31e801ee5173b2d7024b85a67cffb790646428f54a2be6781089c51589b9\" id:\"f9d75517371049eed64b3ed6be2a0e2aa415efe9bec564b6cf7795a80a5a4b54\" pid:5770 exited_at:{seconds:1757548315 nanos:500831621}"
Sep 10 23:51:58.024398 containerd[1544]: time="2025-09-10T23:51:58.024338909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"36efbd79ca6c664f8f8281b00dc70b38d5c5eb96d4591960d48a8433946fd721\" pid:5799 exited_at:{seconds:1757548318 nanos:23802709}"
Sep 10 23:52:00.418564 systemd[1]: Started sshd@18-157.90.149.201:22-139.178.89.65:60070.service - OpenSSH per-connection server daemon (139.178.89.65:60070).
Sep 10 23:52:01.421926 sshd[5809]: Accepted publickey for core from 139.178.89.65 port 60070 ssh2: RSA SHA256:WhS/KOZ1o/uklv7h/4WLPYbUs/Yyh1JayZdeMawA7QM
Sep 10 23:52:01.424680 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:01.431574 systemd-logind[1466]: New session 19 of user core.
Sep 10 23:52:01.436876 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:52:02.032537 containerd[1544]: time="2025-09-10T23:52:02.032494246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7bf2c7f7c3a601de223350b573885f6249ab5f0809bdcde72dda9577caf3e31\" id:\"dd2b9546046bd590331e7d8dbdecbf9ed9c7355b121be8c42e2ca60577a62d4f\" pid:5833 exited_at:{seconds:1757548322 nanos:31586206}"
Sep 10 23:52:02.197470 sshd[5811]: Connection closed by 139.178.89.65 port 60070
Sep 10 23:52:02.198452 sshd-session[5809]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:02.204060 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:52:02.205199 systemd[1]: sshd@18-157.90.149.201:22-139.178.89.65:60070.service: Deactivated successfully.
Sep 10 23:52:02.209498 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:52:02.211775 systemd-logind[1466]: Removed session 19.
Sep 10 23:52:13.650158 containerd[1544]: time="2025-09-10T23:52:13.650069685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9c4f099279f970e9fd7c925f4726fb0c9e41304676970515924c22de220eb28\" id:\"5a8801fc4744ec7a11365b34820b6c2f2f424f6140ba78625226d256ab8529d0\" pid:5859 exited_at:{seconds:1757548333 nanos:649684365}"
Sep 10 23:52:19.623249 kubelet[2760]: E0910 23:52:19.623117    2760 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52062->10.0.0.2:2379: read: connection timed out"
Sep 10 23:52:19.634638 systemd[1]: cri-containerd-368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e.scope: Deactivated successfully.
Sep 10 23:52:19.635285 systemd[1]: cri-containerd-368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e.scope: Consumed 3.213s CPU time, 26.1M memory peak, 3.1M read from disk.
Sep 10 23:52:19.638992 containerd[1544]: time="2025-09-10T23:52:19.638950066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\" id:\"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\" pid:2619 exit_status:1 exited_at:{seconds:1757548339 nanos:638293144}"
Sep 10 23:52:19.640479 containerd[1544]: time="2025-09-10T23:52:19.640346710Z" level=info msg="received exit event container_id:\"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\" id:\"368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e\" pid:2619 exit_status:1 exited_at:{seconds:1757548339 nanos:638293144}"
Sep 10 23:52:19.664292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e-rootfs.mount: Deactivated successfully.
Sep 10 23:52:20.296214 systemd[1]: cri-containerd-49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4.scope: Deactivated successfully.
Sep 10 23:52:20.296834 systemd[1]: cri-containerd-49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4.scope: Consumed 21.532s CPU time, 104.4M memory peak, 4.8M read from disk.
Sep 10 23:52:20.299091 containerd[1544]: time="2025-09-10T23:52:20.299027077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\" id:\"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\" pid:3088 exit_status:1 exited_at:{seconds:1757548340 nanos:298453916}"
Sep 10 23:52:20.299377 containerd[1544]: time="2025-09-10T23:52:20.299242118Z" level=info msg="received exit event container_id:\"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\" id:\"49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4\" pid:3088 exit_status:1 exited_at:{seconds:1757548340 nanos:298453916}"
Sep 10 23:52:20.336219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4-rootfs.mount: Deactivated successfully.
Sep 10 23:52:20.340456 systemd[1]: cri-containerd-0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36.scope: Deactivated successfully.
Sep 10 23:52:20.341614 systemd[1]: cri-containerd-0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36.scope: Consumed 4.655s CPU time, 64.4M memory peak, 3.6M read from disk.
Sep 10 23:52:20.347010 containerd[1544]: time="2025-09-10T23:52:20.346454794Z" level=info msg="received exit event container_id:\"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\" id:\"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\" pid:2592 exit_status:1 exited_at:{seconds:1757548340 nanos:345831632}"
Sep 10 23:52:20.348261 containerd[1544]: time="2025-09-10T23:52:20.348081079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\" id:\"0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36\" pid:2592 exit_status:1 exited_at:{seconds:1757548340 nanos:345831632}"
Sep 10 23:52:20.373020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36-rootfs.mount: Deactivated successfully.
Sep 10 23:52:20.532422 kubelet[2760]: I0910 23:52:20.532363    2760 scope.go:117] "RemoveContainer" containerID="368e67cc4a9ee04594fdfd0ff18ea3d31f26309e1b9c1dd239908f80d0ff533e"
Sep 10 23:52:20.532713 kubelet[2760]: I0910 23:52:20.532701    2760 scope.go:117] "RemoveContainer" containerID="0064b5f793c527dd7c7e6d9708765d2fdaf7656f7f1e07803388f28c42a1ac36"
Sep 10 23:52:20.532947 kubelet[2760]: I0910 23:52:20.532838    2760 scope.go:117] "RemoveContainer" containerID="49c6aed8d062a217938223aad3f2d9d2c6b4369dd651568ad727883acc6182b4"
Sep 10 23:52:20.536037 containerd[1544]: time="2025-09-10T23:52:20.535997100Z" level=info msg="CreateContainer within sandbox \"214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 10 23:52:20.536451 containerd[1544]: time="2025-09-10T23:52:20.536418542Z" level=info msg="CreateContainer within sandbox \"2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 10 23:52:20.538449 containerd[1544]: time="2025-09-10T23:52:20.538194107Z" level=info msg="CreateContainer within sandbox \"441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 10 23:52:20.563876 containerd[1544]: time="2025-09-10T23:52:20.563759312Z" level=info msg="Container 8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:20.571622 containerd[1544]: time="2025-09-10T23:52:20.571455217Z" level=info msg="Container c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:20.573517 containerd[1544]: time="2025-09-10T23:52:20.573468784Z" level=info msg="Container 20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:20.589249 containerd[1544]: time="2025-09-10T23:52:20.589012635Z" level=info msg="CreateContainer within sandbox \"214faeb2cbdc77465bb36ebf552007656ec188b64151e3a3628351b8c4f0eaad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4\""
Sep 10 23:52:20.590558 containerd[1544]: time="2025-09-10T23:52:20.590294400Z" level=info msg="CreateContainer within sandbox \"441d9668504a0d2ce6ef17e2d7edb95cddd776833d182f084e7ec48d675c7693\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a\""
Sep 10 23:52:20.590558 containerd[1544]: time="2025-09-10T23:52:20.590527280Z" level=info msg="StartContainer for \"c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4\""
Sep 10 23:52:20.590853 containerd[1544]: time="2025-09-10T23:52:20.590818521Z" level=info msg="CreateContainer within sandbox \"2cc00c03ea7af449618a1340adb01cf9371947b506216903c19cfbf7bad69d73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f\""
Sep 10 23:52:20.591044 containerd[1544]: time="2025-09-10T23:52:20.591021842Z" level=info msg="StartContainer for \"8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a\""
Sep 10 23:52:20.592005 containerd[1544]: time="2025-09-10T23:52:20.591962605Z" level=info msg="connecting to shim c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4" address="unix:///run/containerd/s/192c1f699b3f47e3e3cb4a6097f5c39774bc7ef83b878fed4f0034c4716037a6" protocol=ttrpc version=3
Sep 10 23:52:20.592317 containerd[1544]: time="2025-09-10T23:52:20.591978645Z" level=info msg="connecting to shim 8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a" address="unix:///run/containerd/s/10d7f5c09bdf45de0b1db30ec9d091a7c4d3d4de80928e739dca86bd8916e071" protocol=ttrpc version=3
Sep 10 23:52:20.593371 containerd[1544]: time="2025-09-10T23:52:20.592863128Z" level=info msg="StartContainer for \"20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f\""
Sep 10 23:52:20.595555 containerd[1544]: time="2025-09-10T23:52:20.595519657Z" level=info msg="connecting to shim 20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f" address="unix:///run/containerd/s/d36feb9b3d7612d5043376f78d3e88a6f294c646e58a8daa9177aba882709f09" protocol=ttrpc version=3
Sep 10 23:52:20.620944 systemd[1]: Started cri-containerd-8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a.scope - libcontainer container 8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a.
Sep 10 23:52:20.632049 systemd[1]: Started cri-containerd-c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4.scope - libcontainer container c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4.
Sep 10 23:52:20.644137 systemd[1]: Started cri-containerd-20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f.scope - libcontainer container 20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f.
Sep 10 23:52:20.694143 containerd[1544]: time="2025-09-10T23:52:20.694039582Z" level=info msg="StartContainer for \"8636f37991469001e8bdf95761a97f461b7e51dbaf0f3b5d205e978bc297608a\" returns successfully"
Sep 10 23:52:20.739188 containerd[1544]: time="2025-09-10T23:52:20.739055091Z" level=info msg="StartContainer for \"c80ceadaa762c51875c1491d81a33c5e53621a8a85cb25da8dc725e6397729e4\" returns successfully"
Sep 10 23:52:20.745360 containerd[1544]: time="2025-09-10T23:52:20.745292552Z" level=info msg="StartContainer for \"20a592ddc791eaf558ada314ef5b9c19b9e3b59188f16d7ac9ac3db2e5ee9b6f\" returns successfully"