Oct 13 00:04:42.805485 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 13 00:04:42.805511 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025
Oct 13 00:04:42.805521 kernel: KASLR enabled
Oct 13 00:04:42.805526 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Oct 13 00:04:42.805532 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Oct 13 00:04:42.805537 kernel: random: crng init done
Oct 13 00:04:42.805544 kernel: secureboot: Secure boot disabled
Oct 13 00:04:42.805549 kernel: ACPI: Early table checksum verification disabled
Oct 13 00:04:42.805555 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Oct 13 00:04:42.805561 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Oct 13 00:04:42.805568 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805574 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805579 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805585 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805592 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805599 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805606 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805611 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.805618 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:04:42.807152 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 13 00:04:42.807161 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Oct 13 00:04:42.807167 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 00:04:42.807174 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Oct 13 00:04:42.807180 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Oct 13 00:04:42.807186 kernel: Zone ranges:
Oct 13 00:04:42.807192 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 13 00:04:42.807204 kernel: DMA32 empty
Oct 13 00:04:42.807210 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Oct 13 00:04:42.807216 kernel: Device empty
Oct 13 00:04:42.807222 kernel: Movable zone start for each node
Oct 13 00:04:42.807228 kernel: Early memory node ranges
Oct 13 00:04:42.807234 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Oct 13 00:04:42.807240 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Oct 13 00:04:42.807247 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Oct 13 00:04:42.807253 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Oct 13 00:04:42.807259 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Oct 13 00:04:42.807265 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Oct 13 00:04:42.807271 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Oct 13 00:04:42.807278 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Oct 13 00:04:42.807322 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Oct 13 00:04:42.807335 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Oct 13 00:04:42.807342 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Oct 13 00:04:42.807348 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Oct 13 00:04:42.807356 kernel: psci: probing for conduit method from ACPI.
Oct 13 00:04:42.807362 kernel: psci: PSCIv1.1 detected in firmware.
Oct 13 00:04:42.807369 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 00:04:42.807375 kernel: psci: Trusted OS migration not required
Oct 13 00:04:42.807382 kernel: psci: SMC Calling Convention v1.1
Oct 13 00:04:42.807389 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 13 00:04:42.807395 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 00:04:42.807402 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 00:04:42.807408 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 13 00:04:42.807415 kernel: Detected PIPT I-cache on CPU0
Oct 13 00:04:42.807422 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 00:04:42.807430 kernel: CPU features: detected: Spectre-v4
Oct 13 00:04:42.807436 kernel: CPU features: detected: Spectre-BHB
Oct 13 00:04:42.807443 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 00:04:42.807449 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 00:04:42.807456 kernel: CPU features: detected: ARM erratum 1418040
Oct 13 00:04:42.807462 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 00:04:42.807469 kernel: alternatives: applying boot alternatives
Oct 13 00:04:42.807478 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:04:42.807485 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 00:04:42.807492 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 00:04:42.807500 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 00:04:42.807507 kernel: Fallback order for Node 0: 0
Oct 13 00:04:42.807513 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Oct 13 00:04:42.807520 kernel: Policy zone: Normal
Oct 13 00:04:42.807526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 00:04:42.807533 kernel: software IO TLB: area num 2.
Oct 13 00:04:42.807539 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Oct 13 00:04:42.807546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 00:04:42.807552 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 00:04:42.807560 kernel: rcu: RCU event tracing is enabled.
Oct 13 00:04:42.807566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 00:04:42.807573 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 00:04:42.807581 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 00:04:42.807587 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 00:04:42.807594 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 00:04:42.807601 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 00:04:42.807607 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 00:04:42.807614 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 00:04:42.807620 kernel: GICv3: 256 SPIs implemented
Oct 13 00:04:42.807627 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 00:04:42.807633 kernel: Root IRQ handler: gic_handle_irq
Oct 13 00:04:42.807640 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 13 00:04:42.807646 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 13 00:04:42.807652 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 13 00:04:42.807661 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 13 00:04:42.807668 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Oct 13 00:04:42.807674 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Oct 13 00:04:42.807681 kernel: GICv3: using LPI property table @0x0000000100120000
Oct 13 00:04:42.807687 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Oct 13 00:04:42.807694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 00:04:42.807701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:04:42.807707 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 13 00:04:42.807714 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 13 00:04:42.807720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 13 00:04:42.807727 kernel: Console: colour dummy device 80x25
Oct 13 00:04:42.807735 kernel: ACPI: Core revision 20240827
Oct 13 00:04:42.807742 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 13 00:04:42.807749 kernel: pid_max: default: 32768 minimum: 301
Oct 13 00:04:42.807756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 00:04:42.807763 kernel: landlock: Up and running.
Oct 13 00:04:42.807769 kernel: SELinux: Initializing.
Oct 13 00:04:42.807776 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:04:42.807783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:04:42.807790 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 00:04:42.807798 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 00:04:42.807805 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 00:04:42.807811 kernel: Remapping and enabling EFI services.
Oct 13 00:04:42.807818 kernel: smp: Bringing up secondary CPUs ...
Oct 13 00:04:42.807825 kernel: Detected PIPT I-cache on CPU1
Oct 13 00:04:42.807832 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 13 00:04:42.807839 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Oct 13 00:04:42.807845 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:04:42.807852 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 13 00:04:42.807860 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 00:04:42.807872 kernel: SMP: Total of 2 processors activated.
Oct 13 00:04:42.807879 kernel: CPU: All CPU(s) started at EL1
Oct 13 00:04:42.807888 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 00:04:42.807912 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 00:04:42.807920 kernel: CPU features: detected: Common not Private translations
Oct 13 00:04:42.807927 kernel: CPU features: detected: CRC32 instructions
Oct 13 00:04:42.807934 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 13 00:04:42.807943 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 00:04:42.807950 kernel: CPU features: detected: LSE atomic instructions
Oct 13 00:04:42.807957 kernel: CPU features: detected: Privileged Access Never
Oct 13 00:04:42.807965 kernel: CPU features: detected: RAS Extension Support
Oct 13 00:04:42.807972 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 00:04:42.807979 kernel: alternatives: applying system-wide alternatives
Oct 13 00:04:42.807986 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Oct 13 00:04:42.807994 kernel: Memory: 3859556K/4096000K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 214964K reserved, 16384K cma-reserved)
Oct 13 00:04:42.808001 kernel: devtmpfs: initialized
Oct 13 00:04:42.808009 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 00:04:42.808017 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 00:04:42.808024 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 00:04:42.808031 kernel: 0 pages in range for non-PLT usage
Oct 13 00:04:42.808038 kernel: 508560 pages in range for PLT usage
Oct 13 00:04:42.808045 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 00:04:42.808052 kernel: SMBIOS 3.0.0 present.
Oct 13 00:04:42.808059 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Oct 13 00:04:42.808066 kernel: DMI: Memory slots populated: 1/1
Oct 13 00:04:42.808074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 00:04:42.808082 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 00:04:42.808089 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 00:04:42.808096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 00:04:42.808103 kernel: audit: initializing netlink subsys (disabled)
Oct 13 00:04:42.808110 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Oct 13 00:04:42.808118 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 00:04:42.808125 kernel: cpuidle: using governor menu
Oct 13 00:04:42.808132 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 00:04:42.808140 kernel: ASID allocator initialised with 32768 entries
Oct 13 00:04:42.808148 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 00:04:42.808155 kernel: Serial: AMBA PL011 UART driver
Oct 13 00:04:42.808162 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 00:04:42.808169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 00:04:42.808176 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 00:04:42.808183 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 00:04:42.808190 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 00:04:42.808197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 00:04:42.808206 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 00:04:42.808213 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 00:04:42.808220 kernel: ACPI: Added _OSI(Module Device)
Oct 13 00:04:42.808227 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 00:04:42.808234 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 00:04:42.808241 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 00:04:42.808248 kernel: ACPI: Interpreter enabled
Oct 13 00:04:42.808255 kernel: ACPI: Using GIC for interrupt routing
Oct 13 00:04:42.808262 kernel: ACPI: MCFG table detected, 1 entries
Oct 13 00:04:42.808270 kernel: ACPI: CPU0 has been hot-added
Oct 13 00:04:42.808277 kernel: ACPI: CPU1 has been hot-added
Oct 13 00:04:42.808291 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 00:04:42.808299 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 00:04:42.808306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 00:04:42.808475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 00:04:42.808540 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 13 00:04:42.808598 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 13 00:04:42.808658 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 13 00:04:42.808714 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 13 00:04:42.808723 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 13 00:04:42.808730 kernel: PCI host bridge to bus 0000:00
Oct 13 00:04:42.808796 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 13 00:04:42.808850 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 13 00:04:42.810825 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 13 00:04:42.810970 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 00:04:42.811070 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 13 00:04:42.811142 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Oct 13 00:04:42.811202 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Oct 13 00:04:42.811262 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 13 00:04:42.811385 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.811453 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Oct 13 00:04:42.811512 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 13 00:04:42.811569 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Oct 13 00:04:42.811627 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Oct 13 00:04:42.811691 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.811749 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Oct 13 00:04:42.811807 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 13 00:04:42.811867 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Oct 13 00:04:42.813029 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.813109 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Oct 13 00:04:42.813169 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 13 00:04:42.813228 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Oct 13 00:04:42.813300 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Oct 13 00:04:42.813382 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.813458 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Oct 13 00:04:42.813516 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 13 00:04:42.813574 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Oct 13 00:04:42.813631 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Oct 13 00:04:42.813696 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.813756 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Oct 13 00:04:42.813813 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 13 00:04:42.813872 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 13 00:04:42.815004 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Oct 13 00:04:42.815103 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.815167 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Oct 13 00:04:42.815226 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 13 00:04:42.815283 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Oct 13 00:04:42.815362 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Oct 13 00:04:42.815433 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.815500 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Oct 13 00:04:42.815558 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 13 00:04:42.815616 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Oct 13 00:04:42.815675 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Oct 13 00:04:42.815739 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.815797 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Oct 13 00:04:42.815856 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 13 00:04:42.816998 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Oct 13 00:04:42.817085 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 13 00:04:42.817146 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Oct 13 00:04:42.817205 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 13 00:04:42.817281 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Oct 13 00:04:42.817383 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Oct 13 00:04:42.817457 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Oct 13 00:04:42.817588 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 13 00:04:42.817684 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Oct 13 00:04:42.817750 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 13 00:04:42.817815 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Oct 13 00:04:42.820041 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Oct 13 00:04:42.820159 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Oct 13 00:04:42.820244 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Oct 13 00:04:42.820336 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Oct 13 00:04:42.820402 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Oct 13 00:04:42.820485 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct 13 00:04:42.820548 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Oct 13 00:04:42.820623 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct 13 00:04:42.820690 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Oct 13 00:04:42.820749 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Oct 13 00:04:42.820822 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Oct 13 00:04:42.820883 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Oct 13 00:04:42.820984 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 13 00:04:42.821056 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 13 00:04:42.821121 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Oct 13 00:04:42.821180 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Oct 13 00:04:42.821239 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Oct 13 00:04:42.821344 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Oct 13 00:04:42.821410 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Oct 13 00:04:42.821468 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Oct 13 00:04:42.821529 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Oct 13 00:04:42.821592 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Oct 13 00:04:42.821649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Oct 13 00:04:42.821711 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 13 00:04:42.821770 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Oct 13 00:04:42.821827 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Oct 13 00:04:42.821923 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 13 00:04:42.822003 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Oct 13 00:04:42.822069 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Oct 13 00:04:42.822131 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 13 00:04:42.822189 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Oct 13 00:04:42.822247 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Oct 13 00:04:42.822320 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 13 00:04:42.822380 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Oct 13 00:04:42.822437 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Oct 13 00:04:42.822504 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 13 00:04:42.822563 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Oct 13 00:04:42.822620 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Oct 13 00:04:42.822682 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 13 00:04:42.822740 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Oct 13 00:04:42.822797 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Oct 13 00:04:42.822862 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 13 00:04:42.822935 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Oct 13 00:04:42.822995 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Oct 13 00:04:42.823057 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Oct 13 00:04:42.823115 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Oct 13 00:04:42.823172 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Oct 13 00:04:42.823237 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Oct 13 00:04:42.823305 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Oct 13 00:04:42.823369 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Oct 13 00:04:42.823427 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Oct 13 00:04:42.823484 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Oct 13 00:04:42.823541 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Oct 13 00:04:42.823597 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Oct 13 00:04:42.823655 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Oct 13 00:04:42.823712 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Oct 13 00:04:42.823770 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Oct 13 00:04:42.823829 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Oct 13 00:04:42.823887 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Oct 13 00:04:42.828049 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Oct 13 00:04:42.828124 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Oct 13 00:04:42.828185 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Oct 13 00:04:42.828250 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Oct 13 00:04:42.828354 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Oct 13 00:04:42.828430 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Oct 13 00:04:42.828489 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Oct 13 00:04:42.828550 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Oct 13 00:04:42.828609 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Oct 13 00:04:42.828670 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Oct 13 00:04:42.828732 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Oct 13 00:04:42.828797 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Oct 13 00:04:42.828859 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Oct 13 00:04:42.828958 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Oct 13 00:04:42.829030 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Oct 13 00:04:42.829098 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Oct 13 00:04:42.829164 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Oct 13 00:04:42.829226 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Oct 13 00:04:42.830046 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Oct 13 00:04:42.830129 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Oct 13 00:04:42.830188 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Oct 13 00:04:42.830249 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Oct 13 00:04:42.830331 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Oct 13 00:04:42.830400 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Oct 13 00:04:42.830466 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Oct 13 00:04:42.830527 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 13 00:04:42.830595 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Oct 13 00:04:42.830657 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 13 00:04:42.830716 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 13 00:04:42.830774 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Oct 13 00:04:42.830836 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 13 00:04:42.830940 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Oct 13 00:04:42.831063 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 13 00:04:42.831129 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 13 00:04:42.831187 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Oct 13 00:04:42.831245 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 13 00:04:42.831359 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Oct 13 00:04:42.831427 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Oct 13 00:04:42.831488 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 13 00:04:42.831549 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 13 00:04:42.831611 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Oct 13 00:04:42.831670 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 13 00:04:42.831736 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Oct 13 00:04:42.831797 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 13 00:04:42.831856 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 13 00:04:42.831932 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Oct 13 00:04:42.831998 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 13 00:04:42.832064 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Oct 13 00:04:42.832124 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Oct 13 00:04:42.832183 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 13 00:04:42.832242 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 13 00:04:42.832316 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 13 00:04:42.832379 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 13 00:04:42.832447 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Oct 13 00:04:42.833338 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Oct 13 00:04:42.833439 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 13 00:04:42.833501 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 13 00:04:42.833561 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Oct 13 00:04:42.833619 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 13 00:04:42.833685 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Oct 13 00:04:42.833746 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Oct 13 00:04:42.833808 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Oct 13 00:04:42.833869 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 13 00:04:42.835330 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 13 00:04:42.835422 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Oct 13 00:04:42.835491 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 13 00:04:42.835554 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 13 00:04:42.835613 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 13 00:04:42.835672 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Oct 13 00:04:42.835731 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 13 00:04:42.835792 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 13 00:04:42.835854 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Oct 13
00:04:42.835968 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Oct 13 00:04:42.836033 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Oct 13 00:04:42.836095 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 13 00:04:42.836147 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 13 00:04:42.836200 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 13 00:04:42.836267 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 13 00:04:42.836344 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Oct 13 00:04:42.836410 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Oct 13 00:04:42.836482 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Oct 13 00:04:42.836538 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Oct 13 00:04:42.836592 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Oct 13 00:04:42.836653 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Oct 13 00:04:42.836708 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Oct 13 00:04:42.836764 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Oct 13 00:04:42.836831 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 13 00:04:42.836884 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Oct 13 00:04:42.836974 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Oct 13 00:04:42.837041 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Oct 13 00:04:42.837096 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Oct 13 00:04:42.837153 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Oct 13 00:04:42.837214 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Oct 13 00:04:42.837269 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Oct 13 00:04:42.837336 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 13 00:04:42.837402 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Oct 13 00:04:42.837458 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Oct 13 00:04:42.837517 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 13 00:04:42.837583 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Oct 13 00:04:42.837637 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Oct 13 00:04:42.837691 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 13 00:04:42.837752 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Oct 13 00:04:42.837806 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Oct 13 00:04:42.837862 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Oct 13 00:04:42.837871 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 13 00:04:42.837881 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 13 00:04:42.837889 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 13 00:04:42.837929 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 13 00:04:42.837938 kernel: iommu: Default domain type: Translated Oct 13 00:04:42.837952 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 13 00:04:42.837959 kernel: efivars: Registered efivars operations Oct 13 00:04:42.837966 kernel: vgaarb: loaded Oct 13 00:04:42.837974 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 13 00:04:42.837981 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 00:04:42.837991 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 00:04:42.837998 kernel: pnp: PnP ACPI init Oct 13 00:04:42.838075 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 13 00:04:42.838087 kernel: pnp: PnP ACPI: found 1 devices Oct 13 00:04:42.838094 kernel: NET: Registered PF_INET 
protocol family Oct 13 00:04:42.838102 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 00:04:42.838109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 00:04:42.838117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 00:04:42.838126 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 00:04:42.838134 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 00:04:42.838141 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 00:04:42.838149 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:04:42.838156 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:04:42.838164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 00:04:42.838231 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 13 00:04:42.838241 kernel: PCI: CLS 0 bytes, default 64 Oct 13 00:04:42.838248 kernel: kvm [1]: HYP mode not available Oct 13 00:04:42.838257 kernel: Initialise system trusted keyrings Oct 13 00:04:42.838265 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 00:04:42.838273 kernel: Key type asymmetric registered Oct 13 00:04:42.838280 kernel: Asymmetric key parser 'x509' registered Oct 13 00:04:42.838318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 13 00:04:42.838327 kernel: io scheduler mq-deadline registered Oct 13 00:04:42.838334 kernel: io scheduler kyber registered Oct 13 00:04:42.838342 kernel: io scheduler bfq registered Oct 13 00:04:42.838350 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 13 00:04:42.838427 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 13 00:04:42.838491 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 13 00:04:42.838550 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.838611 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 13 00:04:42.838675 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 13 00:04:42.838735 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.838799 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 13 00:04:42.838861 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 13 00:04:42.838983 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839050 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 13 00:04:42.839109 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Oct 13 00:04:42.839167 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839229 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 13 00:04:42.839301 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 13 00:04:42.839372 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839437 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 13 00:04:42.839501 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 13 00:04:42.839559 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839618 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 13 00:04:42.839677 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 13 00:04:42.839735 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839796 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 13 00:04:42.839854 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 13 00:04:42.839930 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.839943 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 13 00:04:42.840005 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 13 00:04:42.840064 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 13 00:04:42.840130 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 00:04:42.840141 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 13 00:04:42.840149 kernel: ACPI: button: Power Button [PWRB] Oct 13 00:04:42.840159 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 13 00:04:42.840223 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 13 00:04:42.840302 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 13 00:04:42.840318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 00:04:42.840326 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 13 00:04:42.840394 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 13 00:04:42.840405 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 13 00:04:42.840413 kernel: thunder_xcv, ver 1.0 Oct 13 00:04:42.840420 kernel: thunder_bgx, ver 1.0 Oct 13 00:04:42.840427 kernel: nicpf, ver 1.0 Oct 13 00:04:42.840435 kernel: nicvf, ver 1.0 Oct 13 00:04:42.840518 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 13 00:04:42.840575 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T00:04:42 UTC (1760313882) Oct 13 00:04:42.840585 kernel: hid: raw HID events 
driver (C) Jiri Kosina Oct 13 00:04:42.840593 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 13 00:04:42.840601 kernel: NET: Registered PF_INET6 protocol family Oct 13 00:04:42.840608 kernel: watchdog: NMI not fully supported Oct 13 00:04:42.840615 kernel: watchdog: Hard watchdog permanently disabled Oct 13 00:04:42.840623 kernel: Segment Routing with IPv6 Oct 13 00:04:42.840632 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 00:04:42.840639 kernel: NET: Registered PF_PACKET protocol family Oct 13 00:04:42.840647 kernel: Key type dns_resolver registered Oct 13 00:04:42.840654 kernel: registered taskstats version 1 Oct 13 00:04:42.840661 kernel: Loading compiled-in X.509 certificates Oct 13 00:04:42.840669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139' Oct 13 00:04:42.840676 kernel: Demotion targets for Node 0: null Oct 13 00:04:42.840684 kernel: Key type .fscrypt registered Oct 13 00:04:42.840691 kernel: Key type fscrypt-provisioning registered Oct 13 00:04:42.840700 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 00:04:42.840707 kernel: ima: Allocated hash algorithm: sha1 Oct 13 00:04:42.840715 kernel: ima: No architecture policies found Oct 13 00:04:42.840723 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 13 00:04:42.840730 kernel: clk: Disabling unused clocks Oct 13 00:04:42.840738 kernel: PM: genpd: Disabling unused power domains Oct 13 00:04:42.840746 kernel: Warning: unable to open an initial console. 
Oct 13 00:04:42.840753 kernel: Freeing unused kernel memory: 38976K
Oct 13 00:04:42.840762 kernel: Run /init as init process
Oct 13 00:04:42.840770 kernel: with arguments:
Oct 13 00:04:42.840777 kernel: /init
Oct 13 00:04:42.840784 kernel: with environment:
Oct 13 00:04:42.840792 kernel: HOME=/
Oct 13 00:04:42.840799 kernel: TERM=linux
Oct 13 00:04:42.840806 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 00:04:42.840814 systemd[1]: Successfully made /usr/ read-only.
Oct 13 00:04:42.840825 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:04:42.840834 systemd[1]: Detected virtualization kvm.
Oct 13 00:04:42.840842 systemd[1]: Detected architecture arm64.
Oct 13 00:04:42.840850 systemd[1]: Running in initrd.
Oct 13 00:04:42.840858 systemd[1]: No hostname configured, using default hostname.
Oct 13 00:04:42.840866 systemd[1]: Hostname set to .
Oct 13 00:04:42.840874 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 00:04:42.840881 systemd[1]: Queued start job for default target initrd.target.
Oct 13 00:04:42.840891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:04:42.840926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:04:42.840935 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 00:04:42.840943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:04:42.840951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
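The systemd 256.8 banner above encodes compile-time features as `+FLAG`/`-FLAG` tokens (e.g. `+PAM`, `-GCRYPT`). A minimal sketch, not systemd code, of splitting such a banner into enabled and disabled feature sets:

```python
def parse_features(banner: str):
    """Split a systemd feature banner like '+PAM +AUDIT -GCRYPT' into
    (enabled, disabled) sets.  Illustrative helper, not part of systemd."""
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# Subset of the flags from the log line above:
enabled, disabled = parse_features("+PAM +AUDIT +SELINUX -APPARMOR -GCRYPT +TPM2")
# "PAM" and "TPM2" end up in `enabled`; "APPARMOR" and "GCRYPT" in `disabled`
```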
Oct 13 00:04:42.840960 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 00:04:42.840969 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 00:04:42.840979 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 00:04:42.840987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:04:42.840995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:04:42.841003 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:04:42.841011 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:04:42.841019 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:04:42.841026 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:04:42.841034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:04:42.841044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:04:42.841053 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 00:04:42.841061 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 00:04:42.841069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:04:42.841078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:04:42.841085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:04:42.841093 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 00:04:42.841101 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 00:04:42.841109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:04:42.841118 systemd[1]: Finished network-cleanup.service - Network Cleanup.
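The device unit names in the entries above (`dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device`, `dev-mapper-usr.device`) come from systemd's path-to-unit-name escaping: strip the leading `/`, map interior `/` to `-`, and hex-escape other unsafe bytes such as `-` as `\x2d`. A simplified sketch of that mapping (the real rules live in systemd's `unit_name_from_path` / the `systemd-escape` tool, and handle more cases than this):

```python
def unit_from_path(path: str, suffix: str = ".device") -> str:
    """Simplified version of systemd's path escaping for unit names:
    drop the leading '/', turn '/' into '-', and escape any byte
    outside [a-zA-Z0-9:_.] (including literal '-') as \\xXX.
    Not a full reimplementation of systemd-escape."""
    body = path.strip("/")
    out = []
    for i, ch in enumerate(body):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i != 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

# unit_from_path("/dev/disk/by-label/EFI-SYSTEM")
#   -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the log
```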
Oct 13 00:04:42.841127 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 00:04:42.841134 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 00:04:42.841143 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:04:42.841151 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:04:42.841159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:04:42.841167 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 00:04:42.841176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:04:42.841185 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 00:04:42.841217 systemd-journald[245]: Collecting audit messages is disabled.
Oct 13 00:04:42.841240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 00:04:42.841248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 00:04:42.841256 kernel: Bridge firewalling registered
Oct 13 00:04:42.841264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:04:42.841272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:04:42.841280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:42.841321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 00:04:42.841334 systemd-journald[245]: Journal started
Oct 13 00:04:42.841353 systemd-journald[245]: Runtime Journal (/run/log/journal/764f0938bc71449ea51447b18170da1e) is 8M, max 76.5M, 68.5M free.
Oct 13 00:04:42.817967 systemd-modules-load[246]: Inserted module 'overlay'
Oct 13 00:04:42.836391 systemd-modules-load[246]: Inserted module 'br_netfilter'
Oct 13 00:04:42.845989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:04:42.849332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:04:42.851031 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:04:42.861073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:04:42.867305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:04:42.874078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:04:42.878851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:04:42.881154 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 00:04:42.885074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 00:04:42.886455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:04:42.891079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:04:42.916008 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:04:42.933550 systemd-resolved[286]: Positive Trust Anchors:
Oct 13 00:04:42.934262 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:04:42.934329 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:04:42.944244 systemd-resolved[286]: Defaulting to hostname 'linux'.
Oct 13 00:04:42.945803 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:04:42.947056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:04:43.005011 kernel: SCSI subsystem initialized
Oct 13 00:04:43.008935 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 00:04:43.017308 kernel: iscsi: registered transport (tcp)
Oct 13 00:04:43.029965 kernel: iscsi: registered transport (qla4xxx)
Oct 13 00:04:43.030053 kernel: QLogic iSCSI HBA Driver
Oct 13 00:04:43.052530 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:04:43.076351 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:04:43.080866 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:04:43.142051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:04:43.145037 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
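The systemd-resolved entries above list negative trust anchors: domains (and their subdomains) for which DNSSEC validation is skipped, such as `home.arpa`, the RFC 1918 reverse zones, and `local`. The membership check amounts to a suffix match over domain labels, sketched here with a subset of the anchors from the log (illustrative only, not resolved's implementation):

```python
# Subset of the negative trust anchors from the log above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "local", "internal", "lan", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if `name` equals or is a subdomain of a negative trust
    anchor, i.e. DNSSEC validation would be skipped for it."""
    labels = name.rstrip(".").split(".")
    # Check every suffix of the name against the anchor set.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in NEGATIVE_ANCHORS:
            return True
    return False
```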
Oct 13 00:04:43.207943 kernel: raid6: neonx8 gen() 15657 MB/s
Oct 13 00:04:43.224955 kernel: raid6: neonx4 gen() 15748 MB/s
Oct 13 00:04:43.241972 kernel: raid6: neonx2 gen() 13154 MB/s
Oct 13 00:04:43.258958 kernel: raid6: neonx1 gen() 10423 MB/s
Oct 13 00:04:43.275968 kernel: raid6: int64x8 gen() 6878 MB/s
Oct 13 00:04:43.292966 kernel: raid6: int64x4 gen() 7319 MB/s
Oct 13 00:04:43.310046 kernel: raid6: int64x2 gen() 6060 MB/s
Oct 13 00:04:43.326968 kernel: raid6: int64x1 gen() 5039 MB/s
Oct 13 00:04:43.327066 kernel: raid6: using algorithm neonx4 gen() 15748 MB/s
Oct 13 00:04:43.343982 kernel: raid6: .... xor() 12304 MB/s, rmw enabled
Oct 13 00:04:43.344080 kernel: raid6: using neon recovery algorithm
Oct 13 00:04:43.349075 kernel: xor: measuring software checksum speed
Oct 13 00:04:43.349144 kernel: 8regs : 20686 MB/sec
Oct 13 00:04:43.349166 kernel: 32regs : 21693 MB/sec
Oct 13 00:04:43.349186 kernel: arm64_neon : 28013 MB/sec
Oct 13 00:04:43.349943 kernel: xor: using function: arm64_neon (28013 MB/sec)
Oct 13 00:04:43.403969 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 00:04:43.412598 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:04:43.416231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:04:43.449965 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Oct 13 00:04:43.454433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:04:43.458924 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 00:04:43.484838 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
Oct 13 00:04:43.516708 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:04:43.519166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:04:43.575722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
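The raid6/xor lines above show the kernel benchmarking every available implementation at boot and committing to the fastest one: `neonx4` wins the RAID6 gen() race at 15748 MB/s, and `arm64_neon` wins the xor checksum race at 28013 MB/sec. The selection step is just an argmax over measured throughput, sketched here with the figures from this boot:

```python
# RAID6 gen() benchmark results from the log above (MB/s).
raid6_gen = {
    "neonx8": 15657, "neonx4": 15748, "neonx2": 13154, "neonx1": 10423,
    "int64x8": 6878, "int64x4": 7319, "int64x2": 6060, "int64x1": 5039,
}
# Pick the fastest implementation, as the kernel does after timing each.
best = max(raid6_gen, key=raid6_gen.get)
# best == "neonx4", matching "raid6: using algorithm neonx4 gen() 15748 MB/s"
```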
Oct 13 00:04:43.578680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 00:04:43.669952 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Oct 13 00:04:43.671920 kernel: scsi host0: Virtio SCSI HBA
Oct 13 00:04:43.677643 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 13 00:04:43.677745 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 13 00:04:43.712023 kernel: ACPI: bus type USB registered
Oct 13 00:04:43.712087 kernel: usbcore: registered new interface driver usbfs
Oct 13 00:04:43.712100 kernel: usbcore: registered new interface driver hub
Oct 13 00:04:43.712109 kernel: usbcore: registered new device driver usb
Oct 13 00:04:43.725249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:04:43.725410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:43.727423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:04:43.730983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:04:43.735248 kernel: sd 0:0:0:1: Power-on or device reset occurred
Oct 13 00:04:43.735504 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 13 00:04:43.735959 kernel: sd 0:0:0:1: [sda] Write Protect is off
Oct 13 00:04:43.736679 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Oct 13 00:04:43.736776 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 13 00:04:43.740114 kernel: sr 0:0:0:0: Power-on or device reset occurred
Oct 13 00:04:43.740313 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Oct 13 00:04:43.740399 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 00:04:43.740911 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 13 00:04:43.750820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 00:04:43.750926 kernel: GPT:17805311 != 80003071
Oct 13 00:04:43.750958 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 00:04:43.752087 kernel: GPT:17805311 != 80003071
Oct 13 00:04:43.752128 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 00:04:43.752138 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:04:43.754702 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Oct 13 00:04:43.764940 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 13 00:04:43.765174 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 13 00:04:43.766238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:43.771551 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 13 00:04:43.771739 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 13 00:04:43.771819 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 13 00:04:43.771893 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 13 00:04:43.772006 kernel: hub 1-0:1.0: USB hub found
Oct 13 00:04:43.772103 kernel: hub 1-0:1.0: 4 ports detected
Oct 13 00:04:43.772173 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 13 00:04:43.772260 kernel: hub 2-0:1.0: USB hub found
Oct 13 00:04:43.772982 kernel: hub 2-0:1.0: 4 ports detected
Oct 13 00:04:43.831325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 13 00:04:43.849533 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 13 00:04:43.859963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 13 00:04:43.867539 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 13 00:04:43.868260 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
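The GPT warnings above are the classic first-boot symptom of a disk image written to a larger disk: on a GPT disk the backup header must sit in the last LBA (sector count minus one), but here it is at LBA 17805311 from the original image while the 80003072-sector disk (see the `sd 0:0:0:1: [sda] 80003072 512-byte logical blocks` line) ends at LBA 80003071. A quick check of that arithmetic:

```python
# On a GPT disk the backup header lives in the last LBA (sectors - 1).
# Values taken from the log above.
disk_sectors = 80003072               # 512-byte logical blocks reported for sda
expected_alt_lba = disk_sectors - 1   # where the backup header should be
found_alt_lba = 17805311              # where the image's backup header actually is

assert expected_alt_lba == 80003071
assert found_alt_lba != expected_alt_lba  # hence "GPT:17805311 != 80003071"
```

The log's "Use GNU Parted to correct GPT errors" hint is not needed here: the later `disk-uuid.service` entries show the table being rewritten automatically ("disk-uuid[609]: The operation has completed successfully"), after which the warning does not recur.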
Oct 13 00:04:43.870061 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:04:43.875559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:04:43.876361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:04:43.878452 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:04:43.880386 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 00:04:43.881676 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 00:04:43.906123 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:04:43.911011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:04:43.911154 disk-uuid[601]: Primary Header is updated.
Oct 13 00:04:43.911154 disk-uuid[601]: Secondary Entries is updated.
Oct 13 00:04:43.911154 disk-uuid[601]: Secondary Header is updated.
Oct 13 00:04:44.014974 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 13 00:04:44.150686 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Oct 13 00:04:44.150749 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 13 00:04:44.151927 kernel: usbcore: registered new interface driver usbhid
Oct 13 00:04:44.152108 kernel: usbhid: USB HID core driver
Oct 13 00:04:44.254965 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Oct 13 00:04:44.382961 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Oct 13 00:04:44.435958 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Oct 13 00:04:44.938934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:04:44.939917 disk-uuid[609]: The operation has completed successfully.
Oct 13 00:04:45.001021 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 00:04:45.001163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 00:04:45.024602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 00:04:45.047953 sh[625]: Success
Oct 13 00:04:45.064020 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 00:04:45.064087 kernel: device-mapper: uevent: version 1.0.3
Oct 13 00:04:45.064099 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 00:04:45.073945 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 00:04:45.124534 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 00:04:45.128514 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 00:04:45.144236 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 00:04:45.158297 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (637)
Oct 13 00:04:45.158367 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174
Oct 13 00:04:45.158388 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:04:45.166363 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 13 00:04:45.166445 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 00:04:45.166470 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 00:04:45.169009 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 00:04:45.169688 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:04:45.170933 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 00:04:45.171717 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 00:04:45.175831 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 00:04:45.205230 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (668)
Oct 13 00:04:45.205326 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:04:45.205340 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:04:45.211936 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 00:04:45.212008 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:04:45.212019 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:04:45.218725 kernel: BTRFS info (device sda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:04:45.218008 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 00:04:45.221744 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 00:04:45.322932 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:04:45.328105 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:04:45.366129 ignition[716]: Ignition 2.22.0
Oct 13 00:04:45.366143 ignition[716]: Stage: fetch-offline
Oct 13 00:04:45.366174 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:45.366181 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:45.366278 ignition[716]: parsed url from cmdline: ""
Oct 13 00:04:45.366281 ignition[716]: no config URL provided
Oct 13 00:04:45.366286 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:04:45.366293 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:04:45.366298 ignition[716]: failed to fetch config: resource requires networking
Oct 13 00:04:45.370647 systemd-networkd[811]: lo: Link UP
Oct 13 00:04:45.366458 ignition[716]: Ignition finished successfully
Oct 13 00:04:45.370651 systemd-networkd[811]: lo: Gained carrier
Oct 13 00:04:45.370997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:04:45.373937 systemd-networkd[811]: Enumeration completed
Oct 13 00:04:45.374049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:04:45.374728 systemd[1]: Reached target network.target - Network.
Oct 13 00:04:45.376506 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:45.376510 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:04:45.377011 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:45.377014 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:04:45.377340 systemd-networkd[811]: eth0: Link UP
Oct 13 00:04:45.377439 systemd-networkd[811]: eth1: Link UP
Oct 13 00:04:45.377938 systemd-networkd[811]: eth0: Gained carrier
Oct 13 00:04:45.377948 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:45.378532 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 13 00:04:45.385234 systemd-networkd[811]: eth1: Gained carrier
Oct 13 00:04:45.385250 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:45.412988 systemd-networkd[811]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Oct 13 00:04:45.416651 ignition[816]: Ignition 2.22.0
Oct 13 00:04:45.416665 ignition[816]: Stage: fetch
Oct 13 00:04:45.416855 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:45.416865 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:45.417008 ignition[816]: parsed url from cmdline: ""
Oct 13 00:04:45.417012 ignition[816]: no config URL provided
Oct 13 00:04:45.417018 ignition[816]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:04:45.417025 ignition[816]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:04:45.417069 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 13 00:04:45.417508 ignition[816]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 13 00:04:45.423979 systemd-networkd[811]: eth0: DHCPv4 address 5.75.247.119/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 13 00:04:45.617769 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 13 00:04:45.624656 ignition[816]: GET result: OK
Oct 13 00:04:45.624805 ignition[816]: parsing config with SHA512: 111f4f502e16a9127ab330fe658ad895dadcd2e3ed901163a9c11f3c5448d6af4ec8e34bde86d3bca7e57fe1ba1e72ec5b294823c5f6debb1cc385962b0a6fb4
Oct 13 00:04:45.631811 unknown[816]: fetched base config from "system"
Oct 13 00:04:45.631822 unknown[816]: fetched base config from "system"
Oct 13 00:04:45.632237 ignition[816]: fetch: fetch complete
Oct 13 00:04:45.631826 unknown[816]: fetched user config from "hetzner"
Oct 13 00:04:45.632243 ignition[816]: fetch: fetch passed
Oct 13 00:04:45.634023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 13 00:04:45.632366 ignition[816]: Ignition finished successfully
Oct 13 00:04:45.636067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 00:04:45.673348 ignition[823]: Ignition 2.22.0
Oct 13 00:04:45.673365 ignition[823]: Stage: kargs
Oct 13 00:04:45.673518 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:45.673528 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:45.677593 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 00:04:45.674711 ignition[823]: kargs: kargs passed
Oct 13 00:04:45.680493 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 00:04:45.674803 ignition[823]: Ignition finished successfully
Oct 13 00:04:45.715283 ignition[830]: Ignition 2.22.0
Oct 13 00:04:45.715892 ignition[830]: Stage: disks
Oct 13 00:04:45.716090 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:45.716100 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:45.717040 ignition[830]: disks: disks passed
Oct 13 00:04:45.717122 ignition[830]: Ignition finished successfully
Oct 13 00:04:45.719893 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 00:04:45.721966 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 00:04:45.723201 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 00:04:45.724648 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:04:45.725722 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:04:45.726329 systemd[1]: Reached target basic.target - Basic System.
Oct 13 00:04:45.727763 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 00:04:45.774228 systemd-fsck[839]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Oct 13 00:04:45.779891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 00:04:45.783775 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 00:04:45.870981 kernel: EXT4-fs (sda9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none.
Oct 13 00:04:45.870391 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 00:04:45.872491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:04:45.876779 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:04:45.880609 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 00:04:45.892934 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 13 00:04:45.897365 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 00:04:45.897444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:04:45.905361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 00:04:45.909937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 00:04:45.919579 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (847)
Oct 13 00:04:45.921496 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:04:45.921545 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:04:45.932953 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 00:04:45.933026 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:04:45.934922 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:04:45.938793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:04:45.973120 coreos-metadata[849]: Oct 13 00:04:45.972 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 13 00:04:45.975752 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 00:04:45.977087 coreos-metadata[849]: Oct 13 00:04:45.976 INFO Fetch successful
Oct 13 00:04:45.977087 coreos-metadata[849]: Oct 13 00:04:45.976 INFO wrote hostname ci-4459-1-0-c-ccbbacf556 to /sysroot/etc/hostname
Oct 13 00:04:45.981307 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 00:04:45.985459 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
Oct 13 00:04:45.989922 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 00:04:45.996821 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 00:04:46.100719 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 00:04:46.102589 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 00:04:46.103947 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 00:04:46.120036 kernel: BTRFS info (device sda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:04:46.140227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 00:04:46.153586 ignition[965]: INFO : Ignition 2.22.0
Oct 13 00:04:46.155624 ignition[965]: INFO : Stage: mount
Oct 13 00:04:46.155624 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:46.155624 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:46.155624 ignition[965]: INFO : mount: mount passed
Oct 13 00:04:46.155624 ignition[965]: INFO : Ignition finished successfully
Oct 13 00:04:46.158690 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 00:04:46.159129 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 00:04:46.161435 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 00:04:46.186930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:04:46.222525 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (975)
Oct 13 00:04:46.222662 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:04:46.222687 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:04:46.227273 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 00:04:46.227340 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:04:46.227356 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:04:46.230658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:04:46.260688 ignition[992]: INFO : Ignition 2.22.0
Oct 13 00:04:46.260688 ignition[992]: INFO : Stage: files
Oct 13 00:04:46.261893 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:46.261893 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:46.263936 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 00:04:46.263936 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 00:04:46.263936 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 00:04:46.267046 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 00:04:46.267948 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 00:04:46.269175 unknown[992]: wrote ssh authorized keys file for user: core
Oct 13 00:04:46.270816 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 00:04:46.272730 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 13 00:04:46.273966 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 13 00:04:46.404349 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 00:04:46.485545 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:04:46.487255 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:04:46.495694 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Oct 13 00:04:46.500304 systemd-networkd[811]: eth1: Gained IPv6LL
Oct 13 00:04:46.857964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 00:04:47.076032 systemd-networkd[811]: eth0: Gained IPv6LL
Oct 13 00:04:47.415143 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 13 00:04:47.415143 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 00:04:47.418495 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:04:47.421012 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:04:47.421012 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 00:04:47.421012 ignition[992]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:04:47.424420 ignition[992]: INFO : files: files passed
Oct 13 00:04:47.424420 ignition[992]: INFO : Ignition finished successfully
Oct 13 00:04:47.424950 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 00:04:47.427918 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 00:04:47.433184 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 00:04:47.443660 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 00:04:47.445033 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 00:04:47.455436 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:04:47.455436 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:04:47.458678 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:04:47.461113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:04:47.462127 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 00:04:47.464565 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 00:04:47.520697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 00:04:47.520929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 00:04:47.523551 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 00:04:47.525019 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 00:04:47.525923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 00:04:47.526738 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 00:04:47.569266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:04:47.573617 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 00:04:47.606348 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:04:47.607929 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:04:47.609357 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 00:04:47.609954 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 00:04:47.610084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:04:47.612500 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 00:04:47.614503 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 00:04:47.615302 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 00:04:47.616201 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:04:47.617214 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 00:04:47.618233 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:04:47.619203 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 00:04:47.620120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:04:47.621113 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 00:04:47.622098 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 00:04:47.622976 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 00:04:47.623758 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 00:04:47.623965 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:04:47.625059 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:04:47.626085 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:04:47.627014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 00:04:47.627543 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:04:47.628367 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 00:04:47.628536 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:04:47.629870 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 00:04:47.630063 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:04:47.631127 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 00:04:47.631346 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 00:04:47.632083 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 00:04:47.632235 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 00:04:47.635012 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 00:04:47.635723 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 00:04:47.638057 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:04:47.642040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 00:04:47.642524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 00:04:47.642695 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:04:47.646091 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 00:04:47.646299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:04:47.651517 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 00:04:47.654987 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 00:04:47.665329 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 00:04:47.673753 ignition[1046]: INFO : Ignition 2.22.0
Oct 13 00:04:47.673753 ignition[1046]: INFO : Stage: umount
Oct 13 00:04:47.674769 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:04:47.674769 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 13 00:04:47.680576 ignition[1046]: INFO : umount: umount passed
Oct 13 00:04:47.680576 ignition[1046]: INFO : Ignition finished successfully
Oct 13 00:04:47.680354 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 00:04:47.680529 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 00:04:47.684504 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 00:04:47.684603 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 00:04:47.687647 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 00:04:47.687743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 00:04:47.690576 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 00:04:47.690657 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 00:04:47.692316 systemd[1]: Stopped target network.target - Network.
Oct 13 00:04:47.693538 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 00:04:47.693641 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:04:47.695234 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 00:04:47.696146 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 00:04:47.698823 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:04:47.703963 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 00:04:47.707494 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 00:04:47.708058 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 00:04:47.708102 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:04:47.709192 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 00:04:47.709243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:04:47.710015 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 00:04:47.710069 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 00:04:47.711365 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 00:04:47.711405 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 00:04:47.712683 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 00:04:47.713990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 00:04:47.717940 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 00:04:47.718082 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 00:04:47.719432 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 00:04:47.719544 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 00:04:47.724848 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 00:04:47.725067 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 00:04:47.729292 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 00:04:47.729563 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 00:04:47.730951 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 00:04:47.732938 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 00:04:47.733622 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 00:04:47.734345 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 00:04:47.734404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:04:47.737061 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 00:04:47.737547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 00:04:47.737610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:04:47.738364 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 00:04:47.738407 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:04:47.742169 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 00:04:47.743602 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:04:47.745163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 00:04:47.745512 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:04:47.748812 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:04:47.750522 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 00:04:47.750584 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:04:47.770543 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 00:04:47.771781 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:04:47.773703 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 00:04:47.773868 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 00:04:47.778851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 00:04:47.778971 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:04:47.780053 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 00:04:47.780085 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:04:47.780922 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 00:04:47.780989 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:04:47.782681 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 00:04:47.782730 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:04:47.784155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 00:04:47.784209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:04:47.786638 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 00:04:47.789064 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 00:04:47.789150 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:04:47.792052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 00:04:47.792125 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:04:47.798077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:04:47.798169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:47.803507 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 13 00:04:47.803572 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 13 00:04:47.803608 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:04:47.806139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 00:04:47.808038 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 00:04:47.810208 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 00:04:47.812150 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 00:04:47.851975 systemd[1]: Switching root.
Oct 13 00:04:47.897398 systemd-journald[245]: Journal stopped
Oct 13 00:04:48.863319 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Oct 13 00:04:48.863398 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 00:04:48.863411 kernel: SELinux: policy capability open_perms=1
Oct 13 00:04:48.863421 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 00:04:48.863430 kernel: SELinux: policy capability always_check_network=0
Oct 13 00:04:48.863439 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 00:04:48.863448 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 00:04:48.863459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 00:04:48.863473 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 00:04:48.863482 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 00:04:48.863491 kernel: audit: type=1403 audit(1760313888.035:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 00:04:48.863502 systemd[1]: Successfully loaded SELinux policy in 63.367ms.
Oct 13 00:04:48.863521 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.492ms.
Oct 13 00:04:48.863532 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:04:48.863548 systemd[1]: Detected virtualization kvm.
Oct 13 00:04:48.863559 systemd[1]: Detected architecture arm64.
Oct 13 00:04:48.863569 systemd[1]: Detected first boot.
Oct 13 00:04:48.863579 systemd[1]: Hostname set to .
Oct 13 00:04:48.863588 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 00:04:48.863598 zram_generator::config[1089]: No configuration found.
Oct 13 00:04:48.863608 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 00:04:48.863621 systemd[1]: Populated /etc with preset unit settings.
Oct 13 00:04:48.863633 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 00:04:48.863644 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 00:04:48.863654 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 00:04:48.863664 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 00:04:48.863674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 00:04:48.863684 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 00:04:48.863693 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 00:04:48.863709 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 00:04:48.863719 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 00:04:48.863729 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 00:04:48.863738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 00:04:48.863748 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 00:04:48.863762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:04:48.863773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:04:48.863783 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 00:04:48.863793 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 00:04:48.863805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 00:04:48.863815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:04:48.863826 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 13 00:04:48.863836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:04:48.863847 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:04:48.863860 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 00:04:48.863871 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 00:04:48.863881 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:04:48.863891 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 00:04:48.865017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:04:48.865041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:04:48.865052 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:04:48.865062 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:04:48.865072 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 00:04:48.865082 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 00:04:48.865093 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 00:04:48.865108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:04:48.865118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:04:48.865128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:04:48.865139 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 00:04:48.865149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 00:04:48.865163 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 00:04:48.865173 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 00:04:48.865183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 00:04:48.865193 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 00:04:48.865243 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 00:04:48.865259 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 00:04:48.865270 systemd[1]: Reached target machines.target - Containers.
Oct 13 00:04:48.865280 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 00:04:48.865291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:04:48.865301 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:04:48.865311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 00:04:48.865321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:04:48.865333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 00:04:48.865344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:04:48.865354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 00:04:48.865364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:04:48.865375 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 00:04:48.865387 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 00:04:48.865397 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 00:04:48.865409 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 00:04:48.865419 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 00:04:48.865430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:04:48.865440 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:04:48.865451 kernel: loop: module loaded
Oct 13 00:04:48.865464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:04:48.865474 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:04:48.865484 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 00:04:48.865495 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 00:04:48.865505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:04:48.865517 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 00:04:48.865532 systemd[1]: Stopped verity-setup.service.
Oct 13 00:04:48.865542 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 00:04:48.865552 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 00:04:48.865563 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 00:04:48.865573 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 00:04:48.865583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 00:04:48.865593 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 00:04:48.865603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:04:48.865613 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 00:04:48.865625 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 00:04:48.865635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:04:48.865645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:04:48.865655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:04:48.865666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:04:48.865675 kernel: fuse: init (API version 7.41)
Oct 13 00:04:48.865686 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:04:48.865697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:04:48.865708 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 00:04:48.865719 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 00:04:48.865730 kernel: ACPI: bus type drm_connector registered
Oct 13 00:04:48.865741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:04:48.865751 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 00:04:48.865761 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 00:04:48.865771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:04:48.865781 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:04:48.865791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 00:04:48.865804 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 00:04:48.865814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:04:48.865824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:04:48.865835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 00:04:48.865846 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 00:04:48.865856 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 00:04:48.865867 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 00:04:48.865877 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:04:48.865887 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 00:04:48.868012 systemd-journald[1153]: Collecting audit messages is disabled.
Oct 13 00:04:48.868059 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 00:04:48.868071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:04:48.868082 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 00:04:48.868092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:04:48.868103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 00:04:48.868115 systemd-journald[1153]: Journal started
Oct 13 00:04:48.868138 systemd-journald[1153]: Runtime Journal (/run/log/journal/764f0938bc71449ea51447b18170da1e) is 8M, max 76.5M, 68.5M free.
Oct 13 00:04:48.544789 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 00:04:48.571885 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 13 00:04:48.572442 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 00:04:48.874979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 00:04:48.875040 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:04:48.876953 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 00:04:48.880977 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 00:04:48.908539 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 00:04:48.917045 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 00:04:48.928006 kernel: loop0: detected capacity change from 0 to 200800
Oct 13 00:04:48.938100 systemd-journald[1153]: Time spent on flushing to /var/log/journal/764f0938bc71449ea51447b18170da1e is 69.785ms for 1175 entries.
Oct 13 00:04:48.938100 systemd-journald[1153]: System Journal (/var/log/journal/764f0938bc71449ea51447b18170da1e) is 8M, max 584.8M, 576.8M free.
Oct 13 00:04:49.034570 systemd-journald[1153]: Received client request to flush runtime journal.
Oct 13 00:04:49.034631 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 00:04:49.034656 kernel: loop1: detected capacity change from 0 to 100632
Oct 13 00:04:49.034672 kernel: loop2: detected capacity change from 0 to 8
Oct 13 00:04:49.034693 kernel: loop3: detected capacity change from 0 to 119368
Oct 13 00:04:48.941147 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:04:48.945607 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 00:04:48.947154 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 00:04:48.951519 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 00:04:48.999993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:04:49.006459 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 00:04:49.032304 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 00:04:49.037158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:04:49.039524 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 00:04:49.068292 kernel: loop4: detected capacity change from 0 to 200800
Oct 13 00:04:49.078883 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Oct 13 00:04:49.078914 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Oct 13 00:04:49.087966 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:04:49.101940 kernel: loop5: detected capacity change from 0 to 100632
Oct 13 00:04:49.124064 kernel: loop6: detected capacity change from 0 to 8
Oct 13 00:04:49.125933 kernel: loop7: detected capacity change from 0 to 119368
Oct 13 00:04:49.142625 (sd-merge)[1232]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 13 00:04:49.143585 (sd-merge)[1232]: Merged extensions into '/usr'.
Oct 13 00:04:49.151067 systemd[1]: Reload requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 00:04:49.151085 systemd[1]: Reloading...
Oct 13 00:04:49.288940 zram_generator::config[1263]: No configuration found.
Oct 13 00:04:49.428547 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 00:04:49.528476 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 00:04:49.528673 systemd[1]: Reloading finished in 377 ms.
Oct 13 00:04:49.563777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 00:04:49.566068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 00:04:49.577137 systemd[1]: Starting ensure-sysext.service...
Oct 13 00:04:49.580837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:04:49.593719 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 00:04:49.599091 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:04:49.605020 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 00:04:49.605432 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 00:04:49.605751 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 00:04:49.606066 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 00:04:49.606802 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 00:04:49.607136 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Oct 13 00:04:49.607282 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Oct 13 00:04:49.610710 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:04:49.610862 systemd-tmpfiles[1299]: Skipping /boot
Oct 13 00:04:49.611281 systemd[1]: Reload requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
Oct 13 00:04:49.611291 systemd[1]: Reloading...
Oct 13 00:04:49.617518 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:04:49.617654 systemd-tmpfiles[1299]: Skipping /boot
Oct 13 00:04:49.660159 systemd-udevd[1302]: Using default interface naming scheme 'v255'.
Oct 13 00:04:49.694925 zram_generator::config[1327]: No configuration found.
Oct 13 00:04:49.972928 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 00:04:49.979489 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 13 00:04:49.979845 systemd[1]: Reloading finished in 368 ms.
Oct 13 00:04:49.998007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:04:49.999446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:04:50.024209 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:04:50.030230 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 00:04:50.033728 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 00:04:50.040315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:04:50.047185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:04:50.051175 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 00:04:50.069345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:04:50.072753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:04:50.086314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:04:50.094272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:04:50.097091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:04:50.097288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:04:50.114875 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 00:04:50.133458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 13 00:04:50.138756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:04:50.140082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:04:50.142654 systemd[1]: Finished ensure-sysext.service.
Oct 13 00:04:50.144251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 00:04:50.153368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:04:50.153585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:04:50.161962 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 00:04:50.165095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:04:50.168624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 00:04:50.169882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:04:50.175139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 00:04:50.176437 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:04:50.176497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:04:50.185129 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 13 00:04:50.189265 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 00:04:50.195224 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 00:04:50.198055 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 00:04:50.198447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:04:50.203671 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Oct 13 00:04:50.203767 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 13 00:04:50.203786 kernel: [drm] features: -context_init
Oct 13 00:04:50.203801 kernel: [drm] number of scanouts: 1
Oct 13 00:04:50.203814 kernel: [drm] number of cap sets: 0
Oct 13 00:04:50.203824 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct 13 00:04:50.202454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:04:50.211494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:04:50.219251 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 00:04:50.219486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 00:04:50.225017 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 00:04:50.230356 augenrules[1455]: No rules
Oct 13 00:04:50.231960 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 00:04:50.237988 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 00:04:50.245796 kernel: Console: switching to colour frame buffer device 160x50
Oct 13 00:04:50.247595 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 00:04:50.254930 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 13 00:04:50.290682 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Oct 13 00:04:50.292108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:04:50.293951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:04:50.297024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:04:50.300582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:04:50.301959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:04:50.302005 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:04:50.302034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 00:04:50.332430 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 00:04:50.347332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:04:50.347529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:04:50.348724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:04:50.348955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:04:50.350400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:04:50.350680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:04:50.355668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:04:50.355719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:04:50.383238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:04:50.395088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:04:50.395361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:50.400163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:04:50.497816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:04:50.521734 systemd-networkd[1413]: lo: Link UP
Oct 13 00:04:50.521742 systemd-networkd[1413]: lo: Gained carrier
Oct 13 00:04:50.523868 systemd-networkd[1413]: Enumeration completed
Oct 13 00:04:50.525471 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:50.525483 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:04:50.525592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:04:50.526660 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:50.526664 systemd-networkd[1413]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:04:50.527462 systemd-networkd[1413]: eth0: Link UP
Oct 13 00:04:50.527570 systemd-networkd[1413]: eth0: Gained carrier
Oct 13 00:04:50.527589 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:50.531176 systemd-networkd[1413]: eth1: Link UP
Oct 13 00:04:50.531634 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 00:04:50.531800 systemd-networkd[1413]: eth1: Gained carrier
Oct 13 00:04:50.531822 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:04:50.537135 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 00:04:50.556970 systemd-networkd[1413]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Oct 13 00:04:50.570181 systemd-resolved[1414]: Positive Trust Anchors:
Oct 13 00:04:50.570213 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:04:50.571171 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:04:50.572760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 00:04:50.575000 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 13 00:04:50.575832 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 00:04:50.579528 systemd-resolved[1414]: Using system hostname 'ci-4459-1-0-c-ccbbacf556'.
Oct 13 00:04:50.581345 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:04:50.582048 systemd[1]: Reached target network.target - Network.
Oct 13 00:04:50.582589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:04:50.583282 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:04:50.584011 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 00:04:50.584684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 00:04:50.585610 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 00:04:50.586402 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 00:04:50.587055 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 00:04:50.587740 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 00:04:50.587777 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:04:50.588349 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:04:50.590011 systemd-networkd[1413]: eth0: DHCPv4 address 5.75.247.119/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 13 00:04:50.590372 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 00:04:50.592590 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 00:04:50.594603 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Oct 13 00:04:50.595200 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 00:04:50.596056 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 00:04:50.596739 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 00:04:50.600765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 13 00:04:50.602381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 13 00:04:50.603802 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 13 00:04:50.604613 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 00:04:50.605171 systemd[1]: Reached target basic.target - Basic System.
Oct 13 00:04:50.606021 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 13 00:04:50.606055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 13 00:04:50.607643 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 13 00:04:50.612109 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 13 00:04:50.629123 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 13 00:04:50.634125 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 13 00:04:50.635974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 13 00:04:50.640240 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 13 00:04:50.640830 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 00:04:50.642703 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 13 00:04:50.645011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 13 00:04:50.648331 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Oct 13 00:04:50.652264 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 13 00:04:50.662145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 13 00:04:50.668150 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 13 00:04:50.670924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 13 00:04:50.671517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 13 00:04:50.674624 systemd[1]: Starting update-engine.service - Update Engine...
Oct 13 00:04:50.684768 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 13 00:04:50.688206 jq[1512]: false
Oct 13 00:04:50.689856 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 13 00:04:50.690983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 13 00:04:50.695432 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 13 00:04:50.713465 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 13 00:04:50.714305 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 13 00:04:50.730476 extend-filesystems[1513]: Found /dev/sda6
Oct 13 00:04:50.737216 coreos-metadata[1507]: Oct 13 00:04:50.734 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 13 00:04:50.739616 tar[1525]: linux-arm64/LICENSE
Oct 13 00:04:50.742062 jq[1521]: true
Oct 13 00:04:50.742346 tar[1525]: linux-arm64/helm
Oct 13 00:04:50.742833 coreos-metadata[1507]: Oct 13 00:04:50.742 INFO Fetch successful
Oct 13 00:04:50.742833 coreos-metadata[1507]: Oct 13 00:04:50.742 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 13 00:04:50.747437 coreos-metadata[1507]: Oct 13 00:04:50.746 INFO Fetch successful
Oct 13 00:04:50.749107 extend-filesystems[1513]: Found /dev/sda9
Oct 13 00:04:50.760867 extend-filesystems[1513]: Checking size of /dev/sda9
Oct 13 00:04:50.762596 dbus-daemon[1508]: [system] SELinux support is enabled
Oct 13 00:04:50.762804 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 13 00:04:50.767024 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 13 00:04:50.767061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 13 00:04:50.768447 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 13 00:04:50.768472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 13 00:04:50.770244 systemd[1]: motdgen.service: Deactivated successfully.
Oct 13 00:04:50.771946 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 13 00:04:50.787295 update_engine[1520]: I20251013 00:04:50.786482 1520 main.cc:92] Flatcar Update Engine starting
Oct 13 00:04:50.791974 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 13 00:04:50.798320 jq[1546]: true
Oct 13 00:04:50.797817 systemd[1]: Started update-engine.service - Update Engine.
Oct 13 00:04:50.798968 update_engine[1520]: I20251013 00:04:50.798799 1520 update_check_scheduler.cc:74] Next update check in 9m12s
Oct 13 00:04:50.804361 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 13 00:04:50.832289 extend-filesystems[1513]: Resized partition /dev/sda9
Oct 13 00:04:50.843658 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025)
Oct 13 00:04:50.858190 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 13 00:04:50.925146 systemd-logind[1519]: New seat seat0.
Oct 13 00:04:50.937802 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 00:04:50.937825 systemd-logind[1519]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Oct 13 00:04:50.938250 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 13 00:04:50.950348 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 13 00:04:50.951617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 13 00:04:51.031662 bash[1587]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 00:04:51.038466 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 13 00:04:51.042320 systemd[1]: Starting sshkeys.service...
Oct 13 00:04:51.064915 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 13 00:04:51.084435 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 13 00:04:51.084435 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 13 00:04:51.084435 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 13 00:04:51.090707 extend-filesystems[1513]: Resized filesystem in /dev/sda9
Oct 13 00:04:51.090322 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 13 00:04:51.091316 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 13 00:04:51.103266 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 13 00:04:51.106133 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 13 00:04:51.144420 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 13 00:04:51.166575 coreos-metadata[1596]: Oct 13 00:04:51.165 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 13 00:04:51.167019 coreos-metadata[1596]: Oct 13 00:04:51.166 INFO Fetch successful
Oct 13 00:04:51.174300 containerd[1544]: time="2025-10-13T00:04:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 13 00:04:51.174169 unknown[1596]: wrote ssh authorized keys file for user: core
Oct 13 00:04:51.184198 containerd[1544]: time="2025-10-13T00:04:51.184138080Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210548960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.04µs"
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210597440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210618480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210768920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210786000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210811800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210869960Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.210881280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.211131560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.211149360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.211159760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 00:04:51.211914 containerd[1544]: time="2025-10-13T00:04:51.211168080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 00:04:51.212211 containerd[1544]: time="2025-10-13T00:04:51.211285160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 00:04:51.212211 containerd[1544]: time="2025-10-13T00:04:51.211477600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 00:04:51.212211 containerd[1544]: time="2025-10-13T00:04:51.211505800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 00:04:51.212211 containerd[1544]: time="2025-10-13T00:04:51.211515560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 00:04:51.213251 containerd[1544]: time="2025-10-13T00:04:51.213117320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 00:04:51.214011 containerd[1544]: time="2025-10-13T00:04:51.213980360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 00:04:51.214136 containerd[1544]: time="2025-10-13T00:04:51.214096800Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 00:04:51.219748 update-ssh-keys[1602]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.219735440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.219944400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.219963680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.219980400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.219993840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220006880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220024200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220038000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220050440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220060880Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220070720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 00:04:51.220082 containerd[1544]: time="2025-10-13T00:04:51.220083560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220258160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220282840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220304440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220315960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220326920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 00:04:51.220342 containerd[1544]: time="2025-10-13T00:04:51.220338080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 00:04:51.220436 containerd[1544]: time="2025-10-13T00:04:51.220350600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 00:04:51.220436 containerd[1544]: time="2025-10-13T00:04:51.220361800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 00:04:51.220436 containerd[1544]: time="2025-10-13T00:04:51.220373800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 00:04:51.220436 containerd[1544]: time="2025-10-13T00:04:51.220384320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 00:04:51.220436 containerd[1544]: time="2025-10-13T00:04:51.220394640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 00:04:51.221900 containerd[1544]: time="2025-10-13T00:04:51.220596680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 00:04:51.221900 containerd[1544]: time="2025-10-13T00:04:51.220624320Z" level=info msg="Start snapshots syncer"
Oct 13 00:04:51.221900 containerd[1544]: time="2025-10-13T00:04:51.221567400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 00:04:51.222773 containerd[1544]: time="2025-10-13T00:04:51.222232960Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 00:04:51.222773 containerd[1544]: time="2025-10-13T00:04:51.222296600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 00:04:51.223975 containerd[1544]: time="2025-10-13T00:04:51.223478840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224041640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224087400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224100680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224114960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224128000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224138920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224150360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224225240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224242400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 13 00:04:51.224721 containerd[1544]: time="2025-10-13T00:04:51.224255120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 13 00:04:51.224081 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 13 00:04:51.225019 containerd[1544]: time="2025-10-13T00:04:51.224800440Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.224827320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225238600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225260880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225270040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225288600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225301160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225381720Z" level=info msg="runtime interface created"
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225388400Z" level=info msg="created NRI interface"
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225400200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225416080Z" level=info msg="Connect containerd service"
Oct 13 00:04:51.229029 containerd[1544]: time="2025-10-13T00:04:51.225455320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 13 00:04:51.233387 containerd[1544]: time="2025-10-13T00:04:51.231830520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 13 00:04:51.231944 systemd[1]: Finished sshkeys.service.
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403581640Z" level=info msg="Start subscribing containerd event"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403670640Z" level=info msg="Start recovering state"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403761640Z" level=info msg="Start event monitor"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403774080Z" level=info msg="Start cni network conf syncer for default"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403781360Z" level=info msg="Start streaming server"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403791760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403798400Z" level=info msg="runtime interface starting up..."
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403803600Z" level=info msg="starting plugins..."
Oct 13 00:04:51.404426 containerd[1544]: time="2025-10-13T00:04:51.403816600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 13 00:04:51.406021 containerd[1544]: time="2025-10-13T00:04:51.405972920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 13 00:04:51.410775 containerd[1544]: time="2025-10-13T00:04:51.407934600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 13 00:04:51.410775 containerd[1544]: time="2025-10-13T00:04:51.408046520Z" level=info msg="containerd successfully booted in 0.235558s"
Oct 13 00:04:51.409060 systemd[1]: Started containerd.service - containerd container runtime.
Oct 13 00:04:51.482324 tar[1525]: linux-arm64/README.md
Oct 13 00:04:51.501939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 00:04:51.940118 systemd-networkd[1413]: eth0: Gained IPv6LL
Oct 13 00:04:51.943563 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Oct 13 00:04:51.947099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 13 00:04:51.948636 systemd[1]: Reached target network-online.target - Network is Online.
Oct 13 00:04:51.953082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:04:51.959217 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 13 00:04:52.007530 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 13 00:04:52.024652 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 13 00:04:52.053958 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 13 00:04:52.058069 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 13 00:04:52.076524 systemd[1]: issuegen.service: Deactivated successfully.
Oct 13 00:04:52.076750 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 13 00:04:52.080356 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 13 00:04:52.106431 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 13 00:04:52.111543 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 13 00:04:52.116302 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 13 00:04:52.117128 systemd[1]: Reached target getty.target - Login Prompts.
Oct 13 00:04:52.452260 systemd-networkd[1413]: eth1: Gained IPv6LL
Oct 13 00:04:52.453134 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Oct 13 00:04:52.737486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:04:52.739257 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 13 00:04:52.740485 systemd[1]: Startup finished in 2.239s (kernel) + 5.408s (initrd) + 4.767s (userspace) = 12.415s.
Oct 13 00:04:52.749260 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 00:04:53.215025 kubelet[1655]: E1013 00:04:53.214919 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 00:04:53.217682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 00:04:53.218106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 00:04:53.219999 systemd[1]: kubelet.service: Consumed 807ms CPU time, 246.8M memory peak.
Oct 13 00:05:03.468935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 13 00:05:03.477308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:05:03.658771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:05:03.672429 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 00:05:03.728733 kubelet[1675]: E1013 00:05:03.728616 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 00:05:03.732227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 00:05:03.732356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 00:05:03.733209 systemd[1]: kubelet.service: Consumed 196ms CPU time, 105.9M memory peak.
Oct 13 00:05:13.983133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 00:05:13.985549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:05:14.169938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:05:14.178420 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 00:05:14.222701 kubelet[1690]: E1013 00:05:14.222634 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 00:05:14.225436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 00:05:14.225606 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 00:05:14.226393 systemd[1]: kubelet.service: Consumed 164ms CPU time, 106.6M memory peak. Oct 13 00:05:19.786053 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:05:19.787700 systemd[1]: Started sshd@0-5.75.247.119:22-139.178.89.65:37502.service - OpenSSH per-connection server daemon (139.178.89.65:37502). Oct 13 00:05:20.775758 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 37502 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:20.779452 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:20.791246 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:05:20.792217 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 00:05:20.797116 systemd-logind[1519]: New session 1 of user core. Oct 13 00:05:20.817728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 00:05:20.821491 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:05:20.835346 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:05:20.838779 systemd-logind[1519]: New session c1 of user core. Oct 13 00:05:20.984145 systemd[1703]: Queued start job for default target default.target. Oct 13 00:05:20.993081 systemd[1703]: Created slice app.slice - User Application Slice. Oct 13 00:05:20.993129 systemd[1703]: Reached target paths.target - Paths. Oct 13 00:05:20.993182 systemd[1703]: Reached target timers.target - Timers. Oct 13 00:05:20.996629 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:05:21.029315 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:05:21.029641 systemd[1703]: Reached target sockets.target - Sockets. Oct 13 00:05:21.029812 systemd[1703]: Reached target basic.target - Basic System. 
Oct 13 00:05:21.030032 systemd[1703]: Reached target default.target - Main User Target. Oct 13 00:05:21.030082 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:05:21.030212 systemd[1703]: Startup finished in 183ms. Oct 13 00:05:21.037200 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:05:21.718113 systemd[1]: Started sshd@1-5.75.247.119:22-139.178.89.65:37506.service - OpenSSH per-connection server daemon (139.178.89.65:37506). Oct 13 00:05:22.717586 sshd[1714]: Accepted publickey for core from 139.178.89.65 port 37506 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:22.719462 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:22.728481 systemd-logind[1519]: New session 2 of user core. Oct 13 00:05:22.738221 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 00:05:22.841791 systemd-timesyncd[1444]: Contacted time server 162.159.200.1:123 (2.flatcar.pool.ntp.org). Oct 13 00:05:22.842258 systemd-timesyncd[1444]: Initial clock synchronization to Mon 2025-10-13 00:05:22.937944 UTC. Oct 13 00:05:23.391984 sshd[1717]: Connection closed by 139.178.89.65 port 37506 Oct 13 00:05:23.392886 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Oct 13 00:05:23.399125 systemd[1]: sshd@1-5.75.247.119:22-139.178.89.65:37506.service: Deactivated successfully. Oct 13 00:05:23.401079 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 00:05:23.403354 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Oct 13 00:05:23.405186 systemd-logind[1519]: Removed session 2. Oct 13 00:05:23.564229 systemd[1]: Started sshd@2-5.75.247.119:22-139.178.89.65:39982.service - OpenSSH per-connection server daemon (139.178.89.65:39982). Oct 13 00:05:24.380171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Oct 13 00:05:24.384061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:05:24.543633 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 39982 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:24.545213 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:24.549977 systemd-logind[1519]: New session 3 of user core. Oct 13 00:05:24.552462 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:05:24.555579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:05:24.566457 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:05:24.619233 kubelet[1734]: E1013 00:05:24.618965 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:05:24.622625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:05:24.622878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:05:24.625110 systemd[1]: kubelet.service: Consumed 181ms CPU time, 107.3M memory peak. Oct 13 00:05:25.207997 sshd[1735]: Connection closed by 139.178.89.65 port 39982 Oct 13 00:05:25.208972 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Oct 13 00:05:25.214839 systemd[1]: sshd@2-5.75.247.119:22-139.178.89.65:39982.service: Deactivated successfully. Oct 13 00:05:25.218493 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 00:05:25.219621 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Oct 13 00:05:25.221111 systemd-logind[1519]: Removed session 3. 
Oct 13 00:05:25.377821 systemd[1]: Started sshd@3-5.75.247.119:22-139.178.89.65:39996.service - OpenSSH per-connection server daemon (139.178.89.65:39996). Oct 13 00:05:26.368202 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 39996 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:26.371493 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:26.380966 systemd-logind[1519]: New session 4 of user core. Oct 13 00:05:26.387188 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:05:27.035077 sshd[1750]: Connection closed by 139.178.89.65 port 39996 Oct 13 00:05:27.036109 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Oct 13 00:05:27.041657 systemd[1]: sshd@3-5.75.247.119:22-139.178.89.65:39996.service: Deactivated successfully. Oct 13 00:05:27.043753 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 00:05:27.044819 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:05:27.046705 systemd-logind[1519]: Removed session 4. Oct 13 00:05:27.210513 systemd[1]: Started sshd@4-5.75.247.119:22-139.178.89.65:40002.service - OpenSSH per-connection server daemon (139.178.89.65:40002). Oct 13 00:05:28.215508 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 40002 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:28.218471 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:28.224720 systemd-logind[1519]: New session 5 of user core. Oct 13 00:05:28.232195 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 00:05:28.748550 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 00:05:28.749417 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:05:28.769650 sudo[1760]: pam_unix(sudo:session): session closed for user root Oct 13 00:05:28.930919 sshd[1759]: Connection closed by 139.178.89.65 port 40002 Oct 13 00:05:28.932272 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Oct 13 00:05:28.938288 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Oct 13 00:05:28.939116 systemd[1]: sshd@4-5.75.247.119:22-139.178.89.65:40002.service: Deactivated successfully. Oct 13 00:05:28.941729 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 00:05:28.945781 systemd-logind[1519]: Removed session 5. Oct 13 00:05:29.099351 systemd[1]: Started sshd@5-5.75.247.119:22-139.178.89.65:40010.service - OpenSSH per-connection server daemon (139.178.89.65:40010). Oct 13 00:05:30.087591 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 40010 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:30.089921 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:30.095233 systemd-logind[1519]: New session 6 of user core. Oct 13 00:05:30.108227 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 00:05:30.604785 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 00:05:30.605509 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:05:30.613397 sudo[1771]: pam_unix(sudo:session): session closed for user root Oct 13 00:05:30.622487 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 00:05:30.623281 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:05:30.639358 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:05:30.693721 augenrules[1793]: No rules Oct 13 00:05:30.695247 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:05:30.695469 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:05:30.697438 sudo[1770]: pam_unix(sudo:session): session closed for user root Oct 13 00:05:30.855216 sshd[1769]: Connection closed by 139.178.89.65 port 40010 Oct 13 00:05:30.855876 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Oct 13 00:05:30.862192 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Oct 13 00:05:30.863150 systemd[1]: sshd@5-5.75.247.119:22-139.178.89.65:40010.service: Deactivated successfully. Oct 13 00:05:30.866966 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 00:05:30.869401 systemd-logind[1519]: Removed session 6. Oct 13 00:05:31.033623 systemd[1]: Started sshd@6-5.75.247.119:22-139.178.89.65:40016.service - OpenSSH per-connection server daemon (139.178.89.65:40016). 
Oct 13 00:05:32.050944 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 40016 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:05:32.053060 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:05:32.059405 systemd-logind[1519]: New session 7 of user core. Oct 13 00:05:32.064231 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 00:05:32.572832 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 00:05:32.573670 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:05:32.917496 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 00:05:32.932520 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 00:05:33.186872 dockerd[1823]: time="2025-10-13T00:05:33.186404348Z" level=info msg="Starting up" Oct 13 00:05:33.190575 dockerd[1823]: time="2025-10-13T00:05:33.189798194Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 00:05:33.203619 dockerd[1823]: time="2025-10-13T00:05:33.203568473Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 00:05:33.236792 systemd[1]: var-lib-docker-metacopy\x2dcheck148265405-merged.mount: Deactivated successfully. Oct 13 00:05:33.248333 dockerd[1823]: time="2025-10-13T00:05:33.248281543Z" level=info msg="Loading containers: start." Oct 13 00:05:33.260975 kernel: Initializing XFRM netlink socket Oct 13 00:05:33.522167 systemd-networkd[1413]: docker0: Link UP Oct 13 00:05:33.527032 dockerd[1823]: time="2025-10-13T00:05:33.526892620Z" level=info msg="Loading containers: done." 
Oct 13 00:05:33.548305 dockerd[1823]: time="2025-10-13T00:05:33.547816066Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 00:05:33.548305 dockerd[1823]: time="2025-10-13T00:05:33.547959921Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 00:05:33.548305 dockerd[1823]: time="2025-10-13T00:05:33.548080181Z" level=info msg="Initializing buildkit" Oct 13 00:05:33.578960 dockerd[1823]: time="2025-10-13T00:05:33.578853275Z" level=info msg="Completed buildkit initialization" Oct 13 00:05:33.589228 dockerd[1823]: time="2025-10-13T00:05:33.589177263Z" level=info msg="Daemon has completed initialization" Oct 13 00:05:33.589599 dockerd[1823]: time="2025-10-13T00:05:33.589499521Z" level=info msg="API listen on /run/docker.sock" Oct 13 00:05:33.591007 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 00:05:34.356759 containerd[1544]: time="2025-10-13T00:05:34.356590174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 00:05:34.873700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 00:05:34.878823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:05:34.957643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527990502.mount: Deactivated successfully. Oct 13 00:05:35.062965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 00:05:35.073833 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:05:35.138651 kubelet[2060]: E1013 00:05:35.138376 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:05:35.144591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:05:35.144932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:05:35.145631 systemd[1]: kubelet.service: Consumed 173ms CPU time, 106.7M memory peak. Oct 13 00:05:35.811972 containerd[1544]: time="2025-10-13T00:05:35.811905225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:35.813207 containerd[1544]: time="2025-10-13T00:05:35.813135725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574608" Oct 13 00:05:35.814947 containerd[1544]: time="2025-10-13T00:05:35.814442729Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:35.818569 containerd[1544]: time="2025-10-13T00:05:35.818528463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:35.820639 containerd[1544]: time="2025-10-13T00:05:35.820594506Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id 
\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.463953154s" Oct 13 00:05:35.820792 containerd[1544]: time="2025-10-13T00:05:35.820776867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 13 00:05:35.821662 containerd[1544]: time="2025-10-13T00:05:35.821546090Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 00:05:36.328008 update_engine[1520]: I20251013 00:05:36.327948 1520 update_attempter.cc:509] Updating boot flags... Oct 13 00:05:36.820981 containerd[1544]: time="2025-10-13T00:05:36.820047764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:36.822522 containerd[1544]: time="2025-10-13T00:05:36.822489018Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132163" Oct 13 00:05:36.826685 containerd[1544]: time="2025-10-13T00:05:36.826621617Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:36.831377 containerd[1544]: time="2025-10-13T00:05:36.831318451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:36.833003 containerd[1544]: time="2025-10-13T00:05:36.832951031Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id 
\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.011329081s" Oct 13 00:05:36.833195 containerd[1544]: time="2025-10-13T00:05:36.833173182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 13 00:05:36.834313 containerd[1544]: time="2025-10-13T00:05:36.834015688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 00:05:37.757895 containerd[1544]: time="2025-10-13T00:05:37.757817878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:37.763923 containerd[1544]: time="2025-10-13T00:05:37.762933968Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191904" Oct 13 00:05:37.763923 containerd[1544]: time="2025-10-13T00:05:37.763054471Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:37.766564 containerd[1544]: time="2025-10-13T00:05:37.766519659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:37.767589 containerd[1544]: time="2025-10-13T00:05:37.767545001Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 933.486706ms" Oct 13 00:05:37.767589 containerd[1544]: time="2025-10-13T00:05:37.767585036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 13 00:05:37.768054 containerd[1544]: time="2025-10-13T00:05:37.767994956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 00:05:38.687047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920887789.mount: Deactivated successfully. Oct 13 00:05:38.904104 containerd[1544]: time="2025-10-13T00:05:38.904044726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:38.905912 containerd[1544]: time="2025-10-13T00:05:38.905862517Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789054" Oct 13 00:05:38.907232 containerd[1544]: time="2025-10-13T00:05:38.907172243Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:38.912765 containerd[1544]: time="2025-10-13T00:05:38.912662997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:38.913746 containerd[1544]: time="2025-10-13T00:05:38.913525798Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.145492735s" Oct 13 00:05:38.913746 containerd[1544]: time="2025-10-13T00:05:38.913575960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 13 00:05:38.914271 containerd[1544]: time="2025-10-13T00:05:38.914203458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 00:05:39.483749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872416371.mount: Deactivated successfully. Oct 13 00:05:40.448185 containerd[1544]: time="2025-10-13T00:05:40.448115876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:40.449884 containerd[1544]: time="2025-10-13T00:05:40.449845707Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Oct 13 00:05:40.450998 containerd[1544]: time="2025-10-13T00:05:40.450776504Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:40.453967 containerd[1544]: time="2025-10-13T00:05:40.453719924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:40.455597 containerd[1544]: time="2025-10-13T00:05:40.455207214Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.540952596s" Oct 13 00:05:40.455597 containerd[1544]: time="2025-10-13T00:05:40.455252070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 13 00:05:40.455778 containerd[1544]: time="2025-10-13T00:05:40.455714525Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 00:05:40.995384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468820963.mount: Deactivated successfully. Oct 13 00:05:41.003922 containerd[1544]: time="2025-10-13T00:05:41.003314469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:41.004265 containerd[1544]: time="2025-10-13T00:05:41.004240357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Oct 13 00:05:41.006004 containerd[1544]: time="2025-10-13T00:05:41.005955503Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:41.008203 containerd[1544]: time="2025-10-13T00:05:41.008153335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:41.009033 containerd[1544]: time="2025-10-13T00:05:41.009003180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 
553.261863ms" Oct 13 00:05:41.009156 containerd[1544]: time="2025-10-13T00:05:41.009140289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 13 00:05:41.009878 containerd[1544]: time="2025-10-13T00:05:41.009847218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 00:05:44.738280 containerd[1544]: time="2025-10-13T00:05:44.737388818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:44.738775 containerd[1544]: time="2025-10-13T00:05:44.738537096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410810" Oct 13 00:05:44.739718 containerd[1544]: time="2025-10-13T00:05:44.739676166Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:44.743299 containerd[1544]: time="2025-10-13T00:05:44.743242807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:05:44.744708 containerd[1544]: time="2025-10-13T00:05:44.744669408Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.734781108s" Oct 13 00:05:44.744831 containerd[1544]: time="2025-10-13T00:05:44.744814313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 13 00:05:45.194694 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 13 00:05:45.205053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:05:45.365200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:05:45.376554 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:05:45.423995 kubelet[2270]: E1013 00:05:45.423944 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:05:45.426425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:05:45.426568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:05:45.427034 systemd[1]: kubelet.service: Consumed 167ms CPU time, 106.6M memory peak. Oct 13 00:05:49.656658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:05:49.656810 systemd[1]: kubelet.service: Consumed 167ms CPU time, 106.6M memory peak. Oct 13 00:05:49.660310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:05:49.699576 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-7.scope)... Oct 13 00:05:49.699594 systemd[1]: Reloading... Oct 13 00:05:49.831709 zram_generator::config[2335]: No configuration found. Oct 13 00:05:50.017864 systemd[1]: Reloading finished in 317 ms. Oct 13 00:05:50.074323 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 00:05:50.074619 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 00:05:50.075053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 00:05:50.075107 systemd[1]: kubelet.service: Consumed 118ms CPU time, 95M memory peak.
Oct 13 00:05:50.079075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:05:50.233317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:05:50.248457 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 00:05:50.302579 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 00:05:50.303184 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:05:50.304278 kubelet[2377]: I1013 00:05:50.304201 2377 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 00:05:51.603589 kubelet[2377]: I1013 00:05:51.603540 2377 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 00:05:51.604004 kubelet[2377]: I1013 00:05:51.603990 2377 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 00:05:51.606073 kubelet[2377]: I1013 00:05:51.606044 2377 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 00:05:51.606210 kubelet[2377]: I1013 00:05:51.606193 2377 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 00:05:51.606530 kubelet[2377]: I1013 00:05:51.606504 2377 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 00:05:51.613334 kubelet[2377]: E1013 00:05:51.613274 2377 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://5.75.247.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 13 00:05:51.614135 kubelet[2377]: I1013 00:05:51.614095 2377 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 00:05:51.622972 kubelet[2377]: I1013 00:05:51.622949 2377 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 00:05:51.626800 kubelet[2377]: I1013 00:05:51.625949 2377 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 00:05:51.626800 kubelet[2377]: I1013 00:05:51.626246 2377 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 00:05:51.626800 kubelet[2377]: I1013 00:05:51.626274 2377 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-c-ccbbacf556","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 00:05:51.626800 kubelet[2377]: I1013 00:05:51.626459 2377 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 00:05:51.627112 kubelet[2377]: I1013 00:05:51.626468 2377 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 00:05:51.627112 kubelet[2377]: I1013 00:05:51.626593 2377 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 00:05:51.629406 kubelet[2377]: I1013 00:05:51.629369 2377 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:05:51.631097 kubelet[2377]: I1013 00:05:51.631070 2377 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 00:05:51.631248 kubelet[2377]: I1013 00:05:51.631236 2377 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 00:05:51.631762 kubelet[2377]: E1013 00:05:51.631717 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://5.75.247.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-c-ccbbacf556&limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 00:05:51.632201 kubelet[2377]: I1013 00:05:51.632183 2377 kubelet.go:387] "Adding apiserver pod source"
Oct 13 00:05:51.632278 kubelet[2377]: I1013 00:05:51.632270 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 00:05:51.634183 kubelet[2377]: E1013 00:05:51.634156 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://5.75.247.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 13 00:05:51.634416 kubelet[2377]: I1013 00:05:51.634399 2377 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 00:05:51.635173 kubelet[2377]: I1013 00:05:51.635147 2377 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 00:05:51.635272 kubelet[2377]: I1013 00:05:51.635260 2377 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 00:05:51.635393 kubelet[2377]: W1013 00:05:51.635380 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 00:05:51.639406 kubelet[2377]: I1013 00:05:51.639376 2377 server.go:1262] "Started kubelet"
Oct 13 00:05:51.640243 kubelet[2377]: I1013 00:05:51.640206 2377 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 00:05:51.640849 kubelet[2377]: I1013 00:05:51.640808 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 00:05:51.641830 kubelet[2377]: I1013 00:05:51.641800 2377 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 00:05:51.646534 kubelet[2377]: I1013 00:05:51.646456 2377 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 00:05:51.646745 kubelet[2377]: I1013 00:05:51.646726 2377 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 00:05:51.647145 kubelet[2377]: I1013 00:05:51.647124 2377 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 00:05:51.648616 kubelet[2377]: I1013 00:05:51.648571 2377 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 00:05:51.648871 kubelet[2377]: E1013 00:05:51.648847 2377 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-1-0-c-ccbbacf556\" not found"
Oct 13 00:05:51.651286 kubelet[2377]: I1013 00:05:51.651254 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 00:05:51.653108 kubelet[2377]: I1013 00:05:51.651317 2377 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 00:05:51.653218 kubelet[2377]: I1013 00:05:51.651360 2377 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 00:05:51.655097 kubelet[2377]: E1013 00:05:51.653790 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://5.75.247.119:6443/api/v1/namespaces/default/events\": dial tcp 5.75.247.119:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-c-ccbbacf556.186de43664a061a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-c-ccbbacf556,UID:ci-4459-1-0-c-ccbbacf556,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-c-ccbbacf556,},FirstTimestamp:2025-10-13 00:05:51.639331239 +0000 UTC m=+1.386166221,LastTimestamp:2025-10-13 00:05:51.639331239 +0000 UTC m=+1.386166221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-c-ccbbacf556,}"
Oct 13 00:05:51.655425 kubelet[2377]: E1013 00:05:51.655388 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.247.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-ccbbacf556?timeout=10s\": dial tcp 5.75.247.119:6443: connect: connection refused" interval="200ms"
Oct 13 00:05:51.656224 kubelet[2377]: I1013 00:05:51.656203 2377 factory.go:223] Registration of the systemd container factory successfully
Oct 13 00:05:51.656485 kubelet[2377]: I1013 00:05:51.656419 2377 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 00:05:51.657812 kubelet[2377]: E1013 00:05:51.657777 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://5.75.247.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 13 00:05:51.658114 kubelet[2377]: I1013 00:05:51.658095 2377 factory.go:223] Registration of the containerd container factory successfully
Oct 13 00:05:51.670236 kubelet[2377]: I1013 00:05:51.670028 2377 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 00:05:51.675468 kubelet[2377]: I1013 00:05:51.675428 2377 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 00:05:51.675593 kubelet[2377]: I1013 00:05:51.675475 2377 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 00:05:51.675593 kubelet[2377]: I1013 00:05:51.675553 2377 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 00:05:51.675672 kubelet[2377]: E1013 00:05:51.675638 2377 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 00:05:51.683244 kubelet[2377]: E1013 00:05:51.683200 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://5.75.247.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 00:05:51.687096 kubelet[2377]: I1013 00:05:51.687068 2377 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 00:05:51.687461 kubelet[2377]: I1013 00:05:51.687206 2377 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 00:05:51.687461 kubelet[2377]: I1013 00:05:51.687227 2377 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:05:51.690884 kubelet[2377]: I1013 00:05:51.690468 2377 policy_none.go:49] "None policy: Start"
Oct 13 00:05:51.690884 kubelet[2377]: I1013 00:05:51.690513 2377 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 00:05:51.690884 kubelet[2377]: I1013 00:05:51.690533 2377 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 00:05:51.692282 kubelet[2377]: I1013 00:05:51.692250 2377 policy_none.go:47] "Start"
Oct 13 00:05:51.699321 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 13 00:05:51.714164 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 13 00:05:51.719015 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 13 00:05:51.742716 kubelet[2377]: E1013 00:05:51.742675 2377 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 00:05:51.743391 kubelet[2377]: I1013 00:05:51.743352 2377 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 00:05:51.743488 kubelet[2377]: I1013 00:05:51.743387 2377 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 00:05:51.744312 kubelet[2377]: I1013 00:05:51.744273 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 00:05:51.747150 kubelet[2377]: E1013 00:05:51.747049 2377 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 00:05:51.747769 kubelet[2377]: E1013 00:05:51.747738 2377 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-0-c-ccbbacf556\" not found"
Oct 13 00:05:51.792868 systemd[1]: Created slice kubepods-burstable-poded89b09c015a24ab272739e8ee310664.slice - libcontainer container kubepods-burstable-poded89b09c015a24ab272739e8ee310664.slice.
Oct 13 00:05:51.802700 kubelet[2377]: E1013 00:05:51.802597 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.808019 systemd[1]: Created slice kubepods-burstable-podba5a03efa834908df43ca220df12a187.slice - libcontainer container kubepods-burstable-podba5a03efa834908df43ca220df12a187.slice.
Oct 13 00:05:51.818817 kubelet[2377]: E1013 00:05:51.818767 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.823327 systemd[1]: Created slice kubepods-burstable-pod218186680315933454f872e1f9cce9a7.slice - libcontainer container kubepods-burstable-pod218186680315933454f872e1f9cce9a7.slice.
Oct 13 00:05:51.825992 kubelet[2377]: E1013 00:05:51.825939 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.846567 kubelet[2377]: I1013 00:05:51.846195 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.846860 kubelet[2377]: E1013 00:05:51.846829 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://5.75.247.119:6443/api/v1/nodes\": dial tcp 5.75.247.119:6443: connect: connection refused" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.857513 kubelet[2377]: E1013 00:05:51.857228 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.247.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-ccbbacf556?timeout=10s\": dial tcp 5.75.247.119:6443: connect: connection refused" interval="400ms"
Oct 13 00:05:51.954727 kubelet[2377]: I1013 00:05:51.954640 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954727 kubelet[2377]: I1013 00:05:51.954724 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954929 kubelet[2377]: I1013 00:05:51.954765 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed89b09c015a24ab272739e8ee310664-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-c-ccbbacf556\" (UID: \"ed89b09c015a24ab272739e8ee310664\") " pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954929 kubelet[2377]: I1013 00:05:51.954797 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954929 kubelet[2377]: I1013 00:05:51.954830 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954929 kubelet[2377]: I1013 00:05:51.954862 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.954929 kubelet[2377]: I1013 00:05:51.954918 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.955182 kubelet[2377]: I1013 00:05:51.954956 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:51.955182 kubelet[2377]: I1013 00:05:51.954993 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:52.050196 kubelet[2377]: I1013 00:05:52.050085 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:52.050799 kubelet[2377]: E1013 00:05:52.050708 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://5.75.247.119:6443/api/v1/nodes\": dial tcp 5.75.247.119:6443: connect: connection refused" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:52.106738 containerd[1544]: time="2025-10-13T00:05:52.106284249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-c-ccbbacf556,Uid:ed89b09c015a24ab272739e8ee310664,Namespace:kube-system,Attempt:0,}"
Oct 13 00:05:52.122981 containerd[1544]: time="2025-10-13T00:05:52.122338793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-c-ccbbacf556,Uid:ba5a03efa834908df43ca220df12a187,Namespace:kube-system,Attempt:0,}"
Oct 13 00:05:52.129569 containerd[1544]: time="2025-10-13T00:05:52.129524955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-c-ccbbacf556,Uid:218186680315933454f872e1f9cce9a7,Namespace:kube-system,Attempt:0,}"
Oct 13 00:05:52.258707 kubelet[2377]: E1013 00:05:52.258631 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.247.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-ccbbacf556?timeout=10s\": dial tcp 5.75.247.119:6443: connect: connection refused" interval="800ms"
Oct 13 00:05:52.453743 kubelet[2377]: I1013 00:05:52.453296 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:52.453857 kubelet[2377]: E1013 00:05:52.453769 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://5.75.247.119:6443/api/v1/nodes\": dial tcp 5.75.247.119:6443: connect: connection refused" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:52.475006 kubelet[2377]: E1013 00:05:52.474959 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://5.75.247.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 13 00:05:52.593764 kubelet[2377]: E1013 00:05:52.593641 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://5.75.247.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 00:05:52.616053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201445793.mount: Deactivated successfully.
Oct 13 00:05:52.624248 containerd[1544]: time="2025-10-13T00:05:52.624193730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:05:52.626201 containerd[1544]: time="2025-10-13T00:05:52.626132736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Oct 13 00:05:52.630922 containerd[1544]: time="2025-10-13T00:05:52.630144381Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:05:52.631599 containerd[1544]: time="2025-10-13T00:05:52.631560336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:05:52.632224 containerd[1544]: time="2025-10-13T00:05:52.632186773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 00:05:52.635127 containerd[1544]: time="2025-10-13T00:05:52.635062534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 00:05:52.635216 containerd[1544]: time="2025-10-13T00:05:52.635171922Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:05:52.637350 containerd[1544]: time="2025-10-13T00:05:52.636114238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:05:52.637350 containerd[1544]: time="2025-10-13T00:05:52.636678219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.738604ms"
Oct 13 00:05:52.638769 containerd[1544]: time="2025-10-13T00:05:52.638730734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 513.899435ms"
Oct 13 00:05:52.640798 containerd[1544]: time="2025-10-13T00:05:52.640732356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 508.604549ms"
Oct 13 00:05:52.668267 containerd[1544]: time="2025-10-13T00:05:52.668194601Z" level=info msg="connecting to shim 1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4" address="unix:///run/containerd/s/97e939b8d6d2e90c7a783a9cf19fb1546245e46eb7c92b1375dca3c68bb29bfe" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:05:52.684157 containerd[1544]: time="2025-10-13T00:05:52.684101428Z" level=info msg="connecting to shim 33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947" address="unix:///run/containerd/s/fe3cecfa28607faa2e262efcbc8732c5e51b7dd64e233789bf3740aaa86709af" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:05:52.693272 containerd[1544]: time="2025-10-13T00:05:52.693210432Z" level=info msg="connecting to shim ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff" address="unix:///run/containerd/s/a39451f8351f649957e2277ab42141251a96f90439ec9828a18b0a3b409453c4" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:05:52.704109 systemd[1]: Started cri-containerd-1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4.scope - libcontainer container 1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4.
Oct 13 00:05:52.735711 kubelet[2377]: E1013 00:05:52.735678 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://5.75.247.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-c-ccbbacf556&limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 00:05:52.740230 systemd[1]: Started cri-containerd-33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947.scope - libcontainer container 33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947.
Oct 13 00:05:52.745971 systemd[1]: Started cri-containerd-ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff.scope - libcontainer container ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff.
Oct 13 00:05:52.788868 containerd[1544]: time="2025-10-13T00:05:52.788815160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-c-ccbbacf556,Uid:ed89b09c015a24ab272739e8ee310664,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4\""
Oct 13 00:05:52.798502 kubelet[2377]: E1013 00:05:52.798363 2377 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://5.75.247.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 5.75.247.119:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 13 00:05:52.803979 containerd[1544]: time="2025-10-13T00:05:52.803519727Z" level=info msg="CreateContainer within sandbox \"1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 13 00:05:52.820418 containerd[1544]: time="2025-10-13T00:05:52.820317018Z" level=info msg="Container 416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:05:52.832990 containerd[1544]: time="2025-10-13T00:05:52.832632025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-c-ccbbacf556,Uid:218186680315933454f872e1f9cce9a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947\""
Oct 13 00:05:52.836740 containerd[1544]: time="2025-10-13T00:05:52.836689123Z" level=info msg="CreateContainer within sandbox \"1fecb21375293e7f9fc8b07a9a71eae9ffa5c64566d5ef1dbea3fb5a06a158c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\""
Oct 13 00:05:52.839116 containerd[1544]: time="2025-10-13T00:05:52.839055956Z" level=info msg="StartContainer for \"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\""
Oct 13 00:05:52.840740 containerd[1544]: time="2025-10-13T00:05:52.840695887Z" level=info msg="connecting to shim 416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f" address="unix:///run/containerd/s/97e939b8d6d2e90c7a783a9cf19fb1546245e46eb7c92b1375dca3c68bb29bfe" protocol=ttrpc version=3
Oct 13 00:05:52.841866 containerd[1544]: time="2025-10-13T00:05:52.841789201Z" level=info msg="CreateContainer within sandbox \"33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 13 00:05:52.844859 containerd[1544]: time="2025-10-13T00:05:52.843629463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-c-ccbbacf556,Uid:ba5a03efa834908df43ca220df12a187,Namespace:kube-system,Attempt:0,} returns sandbox id \"ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff\""
Oct 13 00:05:52.852627 containerd[1544]: time="2025-10-13T00:05:52.852568944Z" level=info msg="Container 0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:05:52.865349 systemd[1]: Started cri-containerd-416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f.scope - libcontainer container 416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f.
Oct 13 00:05:52.866954 containerd[1544]: time="2025-10-13T00:05:52.866910179Z" level=info msg="CreateContainer within sandbox \"ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 13 00:05:52.869934 containerd[1544]: time="2025-10-13T00:05:52.869875923Z" level=info msg="CreateContainer within sandbox \"33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\""
Oct 13 00:05:52.873806 containerd[1544]: time="2025-10-13T00:05:52.871138999Z" level=info msg="StartContainer for \"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\""
Oct 13 00:05:52.873806 containerd[1544]: time="2025-10-13T00:05:52.873556725Z" level=info msg="connecting to shim 0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a" address="unix:///run/containerd/s/fe3cecfa28607faa2e262efcbc8732c5e51b7dd64e233789bf3740aaa86709af" protocol=ttrpc version=3
Oct 13 00:05:52.884913 containerd[1544]: time="2025-10-13T00:05:52.884829631Z" level=info msg="Container 59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:05:52.900777 containerd[1544]: time="2025-10-13T00:05:52.900712533Z" level=info msg="CreateContainer within sandbox \"ede186a051dd73876900a74dcde487b630040fce44acc2fa696a659e3ca4d1ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19\""
Oct 13 00:05:52.901524 containerd[1544]: time="2025-10-13T00:05:52.901484807Z" level=info msg="StartContainer for \"59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19\""
Oct 13 00:05:52.905217 containerd[1544]: time="2025-10-13T00:05:52.905167170Z" level=info msg="connecting to shim 59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19" address="unix:///run/containerd/s/a39451f8351f649957e2277ab42141251a96f90439ec9828a18b0a3b409453c4" protocol=ttrpc version=3
Oct 13 00:05:52.908132 systemd[1]: Started cri-containerd-0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a.scope - libcontainer container 0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a.
Oct 13 00:05:52.943584 containerd[1544]: time="2025-10-13T00:05:52.943463771Z" level=info msg="StartContainer for \"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\" returns successfully"
Oct 13 00:05:52.969452 systemd[1]: Started cri-containerd-59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19.scope - libcontainer container 59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19.
Oct 13 00:05:52.984026 containerd[1544]: time="2025-10-13T00:05:52.983962004Z" level=info msg="StartContainer for \"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\" returns successfully"
Oct 13 00:05:53.028212 containerd[1544]: time="2025-10-13T00:05:53.028158550Z" level=info msg="StartContainer for \"59c84e3c8fcc5b67fdd99b85a29f8fa25fd2e6d348594d95f6323d92e5713a19\" returns successfully"
Oct 13 00:05:53.059538 kubelet[2377]: E1013 00:05:53.059481 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.247.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-ccbbacf556?timeout=10s\": dial tcp 5.75.247.119:6443: connect: connection refused" interval="1.6s"
Oct 13 00:05:53.256457 kubelet[2377]: I1013 00:05:53.255712 2377 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:53.698847 kubelet[2377]: E1013 00:05:53.698809 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:53.704827 kubelet[2377]: E1013 00:05:53.704779 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:53.707310 kubelet[2377]: E1013 00:05:53.707286 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:54.707563 kubelet[2377]: E1013 00:05:54.707319 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:54.707563 kubelet[2377]: E1013 00:05:54.707412 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:54.943491 kubelet[2377]: E1013 00:05:54.943246 2377 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.473374 kubelet[2377]: E1013 00:05:55.473266 2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-0-c-ccbbacf556\" not found" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.601450 kubelet[2377]: I1013 00:05:55.601407 2377 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.635832 kubelet[2377]: I1013 00:05:55.635778 2377 apiserver.go:52] "Watching apiserver"
Oct 13 00:05:55.650625 kubelet[2377]: I1013 00:05:55.650087 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.654267 kubelet[2377]: I1013 00:05:55.654228 2377 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 00:05:55.666716 kubelet[2377]: E1013 00:05:55.666650 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.666716 kubelet[2377]: I1013 00:05:55.666697 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.670985 kubelet[2377]: E1013 00:05:55.670943 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-c-ccbbacf556\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.670985 kubelet[2377]: I1013 00:05:55.670975 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:55.675762 kubelet[2377]: E1013 00:05:55.675707 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:56.148351 kubelet[2377]: I1013 00:05:56.148228 2377 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:56.153919 kubelet[2377]: E1013 00:05:56.153723 2377 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:57.715010 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-7.scope)...
Oct 13 00:05:57.715323 systemd[1]: Reloading...
Oct 13 00:05:57.801935 zram_generator::config[2705]: No configuration found.
Oct 13 00:05:58.025494 systemd[1]: Reloading finished in 309 ms.
Oct 13 00:05:58.049498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:05:58.061641 systemd[1]: kubelet.service: Deactivated successfully.
Oct 13 00:05:58.062073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:05:58.062171 systemd[1]: kubelet.service: Consumed 1.868s CPU time, 121.4M memory peak.
Oct 13 00:05:58.066385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:05:58.224892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:05:58.243391 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 00:05:58.299992 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 00:05:58.299992 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:05:58.299992 kubelet[2750]: I1013 00:05:58.298317 2750 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 00:05:58.307691 kubelet[2750]: I1013 00:05:58.307650 2750 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 00:05:58.307691 kubelet[2750]: I1013 00:05:58.307684 2750 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 00:05:58.307855 kubelet[2750]: I1013 00:05:58.307727 2750 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 00:05:58.307855 kubelet[2750]: I1013 00:05:58.307733 2750 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 00:05:58.308312 kubelet[2750]: I1013 00:05:58.308278 2750 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 00:05:58.309623 kubelet[2750]: I1013 00:05:58.309595 2750 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 13 00:05:58.313933 kubelet[2750]: I1013 00:05:58.313872 2750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 00:05:58.318183 kubelet[2750]: I1013 00:05:58.318155 2750 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 00:05:58.320383 kubelet[2750]: I1013 00:05:58.320338 2750 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 00:05:58.320570 kubelet[2750]: I1013 00:05:58.320544 2750 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 00:05:58.320967 kubelet[2750]: I1013 00:05:58.320571 2750 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-c-ccbbacf556","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 00:05:58.320967 kubelet[2750]: I1013 00:05:58.320733 2750 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 00:05:58.320967 kubelet[2750]: I1013 00:05:58.320742 2750 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 00:05:58.320967 kubelet[2750]: I1013 00:05:58.320765 2750 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 00:05:58.321652 kubelet[2750]: I1013 00:05:58.321612 2750 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:05:58.321783 kubelet[2750]: I1013 00:05:58.321772 2750 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 00:05:58.321822 kubelet[2750]: I1013 00:05:58.321806 2750 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 00:05:58.321847 kubelet[2750]: I1013 00:05:58.321834 2750 kubelet.go:387] "Adding apiserver pod source"
Oct 13 00:05:58.321868 kubelet[2750]: I1013 00:05:58.321847 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 00:05:58.326312 kubelet[2750]: I1013 00:05:58.326276 2750 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 00:05:58.327289 kubelet[2750]: I1013 00:05:58.326930 2750 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 00:05:58.327289 kubelet[2750]: I1013 00:05:58.326976 2750 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 00:05:58.331403 kubelet[2750]: I1013 00:05:58.331370 2750 server.go:1262] "Started kubelet"
Oct 13 00:05:58.336431 kubelet[2750]: I1013 00:05:58.336402 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 00:05:58.346688 kubelet[2750]: I1013 00:05:58.346639 2750 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 00:05:58.347613 kubelet[2750]: I1013 00:05:58.347583 2750 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 00:05:58.354225 kubelet[2750]: I1013 00:05:58.354097 2750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 00:05:58.361311 kubelet[2750]: I1013 00:05:58.361230 2750 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 00:05:58.361311 kubelet[2750]: I1013 00:05:58.352984 2750 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 00:05:58.361457 kubelet[2750]: I1013 00:05:58.361349 2750 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 00:05:58.361626 kubelet[2750]: I1013 00:05:58.361538 2750 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 00:05:58.364736 kubelet[2750]: I1013 00:05:58.364662 2750 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 00:05:58.367213 kubelet[2750]: I1013 00:05:58.367117 2750 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 00:05:58.374205 kubelet[2750]: I1013 00:05:58.374131 2750 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 00:05:58.376002 kubelet[2750]: I1013 00:05:58.375959 2750 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 00:05:58.379256 kubelet[2750]: I1013 00:05:58.379230 2750 factory.go:223] Registration of the containerd container factory successfully
Oct 13 00:05:58.379444 kubelet[2750]: I1013 00:05:58.379432 2750 factory.go:223] Registration of the systemd container factory successfully
Oct 13 00:05:58.379579 kubelet[2750]: E1013 00:05:58.379557 2750 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 00:05:58.380752 kubelet[2750]: I1013 00:05:58.380718 2750 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 00:05:58.380752 kubelet[2750]: I1013 00:05:58.380745 2750 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 00:05:58.380858 kubelet[2750]: I1013 00:05:58.380770 2750 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 00:05:58.380858 kubelet[2750]: E1013 00:05:58.380810 2750 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 00:05:58.427790 kubelet[2750]: I1013 00:05:58.427750 2750 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 00:05:58.427790 kubelet[2750]: I1013 00:05:58.427772 2750 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 00:05:58.428075 kubelet[2750]: I1013 00:05:58.427814 2750 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428556 2750 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428589 2750 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428646 2750 policy_none.go:49] "None policy: Start"
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428672 2750 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428702 2750 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428870 2750 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 13 00:05:58.428922 kubelet[2750]: I1013 00:05:58.428880 2750 policy_none.go:47] "Start"
Oct 13 00:05:58.438379 kubelet[2750]: E1013 00:05:58.438339 2750 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 00:05:58.438578 kubelet[2750]: I1013 00:05:58.438540 2750 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 00:05:58.438616 kubelet[2750]: I1013 00:05:58.438581 2750 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 00:05:58.439424 kubelet[2750]: I1013 00:05:58.439267 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 00:05:58.441090 kubelet[2750]: E1013 00:05:58.440817 2750 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 00:05:58.482377 kubelet[2750]: I1013 00:05:58.482329 2750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.482657 kubelet[2750]: I1013 00:05:58.482329 2750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.482789 kubelet[2750]: I1013 00:05:58.482773 2750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.550740 kubelet[2750]: I1013 00:05:58.550392 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.563795 kubelet[2750]: I1013 00:05:58.563605 2750 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.564144 kubelet[2750]: I1013 00:05:58.564050 2750 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.567912 kubelet[2750]: I1013 00:05:58.567775 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.568450 kubelet[2750]: I1013 00:05:58.568157 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.568450 kubelet[2750]: I1013 00:05:58.568295 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed89b09c015a24ab272739e8ee310664-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-c-ccbbacf556\" (UID: \"ed89b09c015a24ab272739e8ee310664\") " pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.568450 kubelet[2750]: I1013 00:05:58.568322 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.568450 kubelet[2750]: I1013 00:05:58.568340 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.568907 kubelet[2750]: I1013 00:05:58.568837 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.570445 kubelet[2750]: I1013 00:05:58.568888 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.570548 kubelet[2750]: I1013 00:05:58.570501 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/218186680315933454f872e1f9cce9a7-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-c-ccbbacf556\" (UID: \"218186680315933454f872e1f9cce9a7\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:58.570576 kubelet[2750]: I1013 00:05:58.570555 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba5a03efa834908df43ca220df12a187-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" (UID: \"ba5a03efa834908df43ca220df12a187\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:59.325603 kubelet[2750]: I1013 00:05:59.325078 2750 apiserver.go:52] "Watching apiserver"
Oct 13 00:05:59.365509 kubelet[2750]: I1013 00:05:59.365470 2750 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 13 00:05:59.397831 kubelet[2750]: I1013 00:05:59.397725 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556" podStartSLOduration=1.397687992 podStartE2EDuration="1.397687992s" podCreationTimestamp="2025-10-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:05:59.386266616 +0000 UTC m=+1.138469492" watchObservedRunningTime="2025-10-13 00:05:59.397687992 +0000 UTC m=+1.149890908"
Oct 13 00:05:59.409857 kubelet[2750]: I1013 00:05:59.409818 2750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:59.410368 kubelet[2750]: I1013 00:05:59.410200 2750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:59.417323 kubelet[2750]: I1013 00:05:59.417245 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-ccbbacf556" podStartSLOduration=1.417213312 podStartE2EDuration="1.417213312s" podCreationTimestamp="2025-10-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:05:59.399597403 +0000 UTC m=+1.151800359" watchObservedRunningTime="2025-10-13 00:05:59.417213312 +0000 UTC m=+1.169416228"
Oct 13 00:05:59.422304 kubelet[2750]: E1013 00:05:59.422056 2750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-c-ccbbacf556\" already exists" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:59.422778 kubelet[2750]: E1013 00:05:59.422691 2750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-c-ccbbacf556\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-0-c-ccbbacf556"
Oct 13 00:05:59.434163 kubelet[2750]: I1013 00:05:59.434082 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-0-c-ccbbacf556" podStartSLOduration=1.434061656 podStartE2EDuration="1.434061656s" podCreationTimestamp="2025-10-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:05:59.418445697 +0000 UTC m=+1.170648613" watchObservedRunningTime="2025-10-13 00:05:59.434061656 +0000 UTC m=+1.186264572"
Oct 13 00:06:03.925624 kubelet[2750]: I1013 00:06:03.925292 2750 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 13 00:06:03.926265 containerd[1544]: time="2025-10-13T00:06:03.926052118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 13 00:06:03.926988 kubelet[2750]: I1013 00:06:03.926956 2750 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 13 00:06:04.748890 systemd[1]: Created slice kubepods-besteffort-podaa8d9ba0_a000_49a9_a33b_bb22e1af22b0.slice - libcontainer container kubepods-besteffort-podaa8d9ba0_a000_49a9_a33b_bb22e1af22b0.slice.
Oct 13 00:06:04.816596 kubelet[2750]: I1013 00:06:04.816514 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddkc8\" (UniqueName: \"kubernetes.io/projected/aa8d9ba0-a000-49a9-a33b-bb22e1af22b0-kube-api-access-ddkc8\") pod \"kube-proxy-mhj77\" (UID: \"aa8d9ba0-a000-49a9-a33b-bb22e1af22b0\") " pod="kube-system/kube-proxy-mhj77"
Oct 13 00:06:04.817038 kubelet[2750]: I1013 00:06:04.816924 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa8d9ba0-a000-49a9-a33b-bb22e1af22b0-kube-proxy\") pod \"kube-proxy-mhj77\" (UID: \"aa8d9ba0-a000-49a9-a33b-bb22e1af22b0\") " pod="kube-system/kube-proxy-mhj77"
Oct 13 00:06:04.817038 kubelet[2750]: I1013 00:06:04.816957 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa8d9ba0-a000-49a9-a33b-bb22e1af22b0-xtables-lock\") pod \"kube-proxy-mhj77\" (UID: \"aa8d9ba0-a000-49a9-a33b-bb22e1af22b0\") " pod="kube-system/kube-proxy-mhj77"
Oct 13 00:06:04.817038 kubelet[2750]: I1013 00:06:04.816974 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa8d9ba0-a000-49a9-a33b-bb22e1af22b0-lib-modules\") pod \"kube-proxy-mhj77\" (UID: \"aa8d9ba0-a000-49a9-a33b-bb22e1af22b0\") " pod="kube-system/kube-proxy-mhj77"
Oct 13 00:06:05.046398 systemd[1]: Created slice kubepods-besteffort-pod65cc4732_23ff_4088_9d6d_987928561f30.slice - libcontainer container kubepods-besteffort-pod65cc4732_23ff_4088_9d6d_987928561f30.slice.
Oct 13 00:06:05.065323 containerd[1544]: time="2025-10-13T00:06:05.065223244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhj77,Uid:aa8d9ba0-a000-49a9-a33b-bb22e1af22b0,Namespace:kube-system,Attempt:0,}"
Oct 13 00:06:05.103810 containerd[1544]: time="2025-10-13T00:06:05.103400727Z" level=info msg="connecting to shim ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d" address="unix:///run/containerd/s/7bfc18c9c58991c53324c5ad8ff90b8832627f0a65939eae858a87bb86865210" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:06:05.120494 kubelet[2750]: I1013 00:06:05.120418 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65cc4732-23ff-4088-9d6d-987928561f30-var-lib-calico\") pod \"tigera-operator-db78d5bd4-wpkzq\" (UID: \"65cc4732-23ff-4088-9d6d-987928561f30\") " pod="tigera-operator/tigera-operator-db78d5bd4-wpkzq"
Oct 13 00:06:05.121062 kubelet[2750]: I1013 00:06:05.120953 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r54wr\" (UniqueName: \"kubernetes.io/projected/65cc4732-23ff-4088-9d6d-987928561f30-kube-api-access-r54wr\") pod \"tigera-operator-db78d5bd4-wpkzq\" (UID: \"65cc4732-23ff-4088-9d6d-987928561f30\") " pod="tigera-operator/tigera-operator-db78d5bd4-wpkzq"
Oct 13 00:06:05.139280 systemd[1]: Started cri-containerd-ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d.scope - libcontainer container ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d.
Oct 13 00:06:05.169376 containerd[1544]: time="2025-10-13T00:06:05.169331635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhj77,Uid:aa8d9ba0-a000-49a9-a33b-bb22e1af22b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d\""
Oct 13 00:06:05.176152 containerd[1544]: time="2025-10-13T00:06:05.176017713Z" level=info msg="CreateContainer within sandbox \"ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 13 00:06:05.194348 containerd[1544]: time="2025-10-13T00:06:05.193084560Z" level=info msg="Container 6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:06:05.203692 containerd[1544]: time="2025-10-13T00:06:05.203647679Z" level=info msg="CreateContainer within sandbox \"ad77e4ec1fa490119620a7c2d118672ee8a4f25a5643e50bb678c36698c81b8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c\""
Oct 13 00:06:05.205412 containerd[1544]: time="2025-10-13T00:06:05.204781575Z" level=info msg="StartContainer for \"6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c\""
Oct 13 00:06:05.208344 containerd[1544]: time="2025-10-13T00:06:05.208294480Z" level=info msg="connecting to shim 6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c" address="unix:///run/containerd/s/7bfc18c9c58991c53324c5ad8ff90b8832627f0a65939eae858a87bb86865210" protocol=ttrpc version=3
Oct 13 00:06:05.231138 systemd[1]: Started cri-containerd-6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c.scope - libcontainer container 6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c.
Oct 13 00:06:05.278630 containerd[1544]: time="2025-10-13T00:06:05.278593706Z" level=info msg="StartContainer for \"6f19f8770dc61267c0870889954a505f008646f06e2b826a2cc89d4df38efb3c\" returns successfully"
Oct 13 00:06:05.354174 containerd[1544]: time="2025-10-13T00:06:05.353706119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-wpkzq,Uid:65cc4732-23ff-4088-9d6d-987928561f30,Namespace:tigera-operator,Attempt:0,}"
Oct 13 00:06:05.383003 containerd[1544]: time="2025-10-13T00:06:05.381958942Z" level=info msg="connecting to shim d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f" address="unix:///run/containerd/s/c71ae397777d68019ca248bc4512026497878a8ba4dea67198b913eba17690d7" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:06:05.414321 systemd[1]: Started cri-containerd-d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f.scope - libcontainer container d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f.
Oct 13 00:06:05.459088 kubelet[2750]: I1013 00:06:05.458991 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mhj77" podStartSLOduration=1.458889517 podStartE2EDuration="1.458889517s" podCreationTimestamp="2025-10-13 00:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:06:05.45852222 +0000 UTC m=+7.210725096" watchObservedRunningTime="2025-10-13 00:06:05.458889517 +0000 UTC m=+7.211092433"
Oct 13 00:06:05.476660 containerd[1544]: time="2025-10-13T00:06:05.476596464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-wpkzq,Uid:65cc4732-23ff-4088-9d6d-987928561f30,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f\""
Oct 13 00:06:05.481176 containerd[1544]: time="2025-10-13T00:06:05.481136168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Oct 13 00:06:06.894467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512258833.mount: Deactivated successfully.
Oct 13 00:06:07.315256 containerd[1544]: time="2025-10-13T00:06:07.315213080Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:07.316495 containerd[1544]: time="2025-10-13T00:06:07.316463175Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Oct 13 00:06:07.317348 containerd[1544]: time="2025-10-13T00:06:07.317321415Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:07.320075 containerd[1544]: time="2025-10-13T00:06:07.320029834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:07.321295 containerd[1544]: time="2025-10-13T00:06:07.321043255Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.838597804s"
Oct 13 00:06:07.321295 containerd[1544]: time="2025-10-13T00:06:07.321077740Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Oct 13 00:06:07.326124 containerd[1544]: time="2025-10-13T00:06:07.326069638Z" level=info msg="CreateContainer within sandbox \"d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 13 00:06:07.341546 containerd[1544]: time="2025-10-13T00:06:07.340186531Z" level=info msg="Container 273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:06:07.348074 containerd[1544]: time="2025-10-13T00:06:07.348025347Z" level=info msg="CreateContainer within sandbox \"d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\""
Oct 13 00:06:07.349359 containerd[1544]: time="2025-10-13T00:06:07.349033608Z" level=info msg="StartContainer for \"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\""
Oct 13 00:06:07.350093 containerd[1544]: time="2025-10-13T00:06:07.349891008Z" level=info msg="connecting to shim 273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d" address="unix:///run/containerd/s/c71ae397777d68019ca248bc4512026497878a8ba4dea67198b913eba17690d7" protocol=ttrpc version=3
Oct 13 00:06:07.373149 systemd[1]: Started cri-containerd-273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d.scope - libcontainer container 273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d.
Oct 13 00:06:07.409679 containerd[1544]: time="2025-10-13T00:06:07.409631519Z" level=info msg="StartContainer for \"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\" returns successfully"
Oct 13 00:06:07.453123 kubelet[2750]: I1013 00:06:07.452584 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-wpkzq" podStartSLOduration=0.61020179 podStartE2EDuration="2.452569441s" podCreationTimestamp="2025-10-13 00:06:05 +0000 UTC" firstStartedPulling="2025-10-13 00:06:05.479818484 +0000 UTC m=+7.232021400" lastFinishedPulling="2025-10-13 00:06:07.322186135 +0000 UTC m=+9.074389051" observedRunningTime="2025-10-13 00:06:07.452295283 +0000 UTC m=+9.204498199" watchObservedRunningTime="2025-10-13 00:06:07.452569441 +0000 UTC m=+9.204772357"
Oct 13 00:06:13.675439 sudo[1806]: pam_unix(sudo:session): session closed for user root
Oct 13 00:06:13.836462 sshd[1805]: Connection closed by 139.178.89.65 port 40016
Oct 13 00:06:13.838087 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Oct 13 00:06:13.844647 systemd[1]: sshd@6-5.75.247.119:22-139.178.89.65:40016.service: Deactivated successfully.
Oct 13 00:06:13.848737 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 00:06:13.851041 systemd[1]: session-7.scope: Consumed 6.948s CPU time, 220.8M memory peak.
Oct 13 00:06:13.855907 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit.
Oct 13 00:06:13.860408 systemd-logind[1519]: Removed session 7.
Oct 13 00:06:21.015732 systemd[1]: Created slice kubepods-besteffort-podf60ab43c_a701_4f5b_be02_35216a2cde55.slice - libcontainer container kubepods-besteffort-podf60ab43c_a701_4f5b_be02_35216a2cde55.slice.
Oct 13 00:06:21.023807 kubelet[2750]: I1013 00:06:21.023742 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f60ab43c-a701-4f5b-be02-35216a2cde55-tigera-ca-bundle\") pod \"calico-typha-567c5b5c59-ccxx6\" (UID: \"f60ab43c-a701-4f5b-be02-35216a2cde55\") " pod="calico-system/calico-typha-567c5b5c59-ccxx6"
Oct 13 00:06:21.023807 kubelet[2750]: I1013 00:06:21.023803 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f60ab43c-a701-4f5b-be02-35216a2cde55-typha-certs\") pod \"calico-typha-567c5b5c59-ccxx6\" (UID: \"f60ab43c-a701-4f5b-be02-35216a2cde55\") " pod="calico-system/calico-typha-567c5b5c59-ccxx6"
Oct 13 00:06:21.023807 kubelet[2750]: I1013 00:06:21.023830 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62qr\" (UniqueName: \"kubernetes.io/projected/f60ab43c-a701-4f5b-be02-35216a2cde55-kube-api-access-s62qr\") pod \"calico-typha-567c5b5c59-ccxx6\" (UID: \"f60ab43c-a701-4f5b-be02-35216a2cde55\") " pod="calico-system/calico-typha-567c5b5c59-ccxx6"
Oct 13 00:06:21.207032 systemd[1]: Created slice kubepods-besteffort-pod5a7b6cce_b47a_4747_ac7e_be822b2bcc74.slice - libcontainer container kubepods-besteffort-pod5a7b6cce_b47a_4747_ac7e_be822b2bcc74.slice.
Oct 13 00:06:21.225398 kubelet[2750]: I1013 00:06:21.225356 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-cni-log-dir\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.225676 kubelet[2750]: I1013 00:06:21.225661 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-tigera-ca-bundle\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.225919 kubelet[2750]: I1013 00:06:21.225730 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-var-run-calico\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.225919 kubelet[2750]: I1013 00:06:21.225751 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-node-certs\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227021 kubelet[2750]: I1013 00:06:21.226991 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-var-lib-calico\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227278 kubelet[2750]: I1013 00:06:21.227242 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-cni-bin-dir\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227324 kubelet[2750]: I1013 00:06:21.227279 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-xtables-lock\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227324 kubelet[2750]: I1013 00:06:21.227299 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlrd\" (UniqueName: \"kubernetes.io/projected/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-kube-api-access-btlrd\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227324 kubelet[2750]: I1013 00:06:21.227315 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-policysync\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227426 kubelet[2750]: I1013 00:06:21.227332 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-flexvol-driver-host\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227426 kubelet[2750]: I1013 00:06:21.227345 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-lib-modules\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.227426 kubelet[2750]: I1013 00:06:21.227369 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a7b6cce-b47a-4747-ac7e-be822b2bcc74-cni-net-dir\") pod \"calico-node-smjrz\" (UID: \"5a7b6cce-b47a-4747-ac7e-be822b2bcc74\") " pod="calico-system/calico-node-smjrz"
Oct 13 00:06:21.323225 containerd[1544]: time="2025-10-13T00:06:21.322889533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567c5b5c59-ccxx6,Uid:f60ab43c-a701-4f5b-be02-35216a2cde55,Namespace:calico-system,Attempt:0,}"
Oct 13 00:06:21.340106 kubelet[2750]: E1013 00:06:21.340066 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.340106 kubelet[2750]: W1013 00:06:21.340096 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.340329 kubelet[2750]: E1013 00:06:21.340128 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.374666 kubelet[2750]: E1013 00:06:21.374041 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.374666 kubelet[2750]: W1013 00:06:21.374074 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.374666 kubelet[2750]: E1013 00:06:21.374100 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.378230 containerd[1544]: time="2025-10-13T00:06:21.378179997Z" level=info msg="connecting to shim d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d" address="unix:///run/containerd/s/d5b9227018480242426f61ec8239553102213ae76cc43896bd9080b7c3463438" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:06:21.428163 systemd[1]: Started cri-containerd-d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d.scope - libcontainer container d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d.
Oct 13 00:06:21.500292 kubelet[2750]: E1013 00:06:21.500225 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc8bg" podUID="01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1"
Oct 13 00:06:21.506816 containerd[1544]: time="2025-10-13T00:06:21.506776927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567c5b5c59-ccxx6,Uid:f60ab43c-a701-4f5b-be02-35216a2cde55,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d\""
Oct 13 00:06:21.509849 containerd[1544]: time="2025-10-13T00:06:21.509816031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Oct 13 00:06:21.514313 kubelet[2750]: E1013 00:06:21.513822 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.514945 kubelet[2750]: W1013 00:06:21.514472 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.514945 kubelet[2750]: E1013 00:06:21.514507 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.515278 kubelet[2750]: E1013 00:06:21.515262 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.515436 kubelet[2750]: W1013 00:06:21.515325 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.515436 kubelet[2750]: E1013 00:06:21.515373 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.515824 kubelet[2750]: E1013 00:06:21.515804 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.516054 kubelet[2750]: W1013 00:06:21.515933 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.516054 kubelet[2750]: E1013 00:06:21.515950 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.517286 kubelet[2750]: E1013 00:06:21.517260 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.517488 kubelet[2750]: W1013 00:06:21.517385 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.517488 kubelet[2750]: E1013 00:06:21.517417 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.517817 kubelet[2750]: E1013 00:06:21.517756 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.517817 kubelet[2750]: W1013 00:06:21.517769 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.517817 kubelet[2750]: E1013 00:06:21.517780 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.518180 kubelet[2750]: E1013 00:06:21.518119 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.518180 kubelet[2750]: W1013 00:06:21.518132 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.518180 kubelet[2750]: E1013 00:06:21.518145 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.519122 kubelet[2750]: E1013 00:06:21.519048 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.519122 kubelet[2750]: W1013 00:06:21.519062 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.519122 kubelet[2750]: E1013 00:06:21.519074 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.519542 kubelet[2750]: E1013 00:06:21.519526 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.519844 kubelet[2750]: W1013 00:06:21.519612 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.519844 kubelet[2750]: E1013 00:06:21.519629 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.519990 kubelet[2750]: E1013 00:06:21.519961 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.519990 kubelet[2750]: W1013 00:06:21.519986 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.520365 kubelet[2750]: E1013 00:06:21.520000 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.520507 kubelet[2750]: E1013 00:06:21.520479 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.520507 kubelet[2750]: W1013 00:06:21.520499 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.520570 kubelet[2750]: E1013 00:06:21.520510 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.520861 kubelet[2750]: E1013 00:06:21.520813 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.520861 kubelet[2750]: W1013 00:06:21.520834 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.520861 kubelet[2750]: E1013 00:06:21.520846 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.522134 kubelet[2750]: E1013 00:06:21.521984 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.522134 kubelet[2750]: W1013 00:06:21.522006 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.522134 kubelet[2750]: E1013 00:06:21.522021 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.522406 kubelet[2750]: E1013 00:06:21.522186 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.522406 kubelet[2750]: W1013 00:06:21.522195 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.522406 kubelet[2750]: E1013 00:06:21.522203 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.523572 containerd[1544]: time="2025-10-13T00:06:21.523226496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smjrz,Uid:5a7b6cce-b47a-4747-ac7e-be822b2bcc74,Namespace:calico-system,Attempt:0,}"
Oct 13 00:06:21.523663 kubelet[2750]: E1013 00:06:21.523410 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.523663 kubelet[2750]: W1013 00:06:21.523425 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.523663 kubelet[2750]: E1013 00:06:21.523444 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.523879 kubelet[2750]: E1013 00:06:21.523854 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.523879 kubelet[2750]: W1013 00:06:21.523871 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.523986 kubelet[2750]: E1013 00:06:21.523883 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.525166 kubelet[2750]: E1013 00:06:21.524963 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.525166 kubelet[2750]: W1013 00:06:21.524995 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.525166 kubelet[2750]: E1013 00:06:21.525010 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.525694 kubelet[2750]: E1013 00:06:21.525648 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.525694 kubelet[2750]: W1013 00:06:21.525665 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.525694 kubelet[2750]: E1013 00:06:21.525677 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.526004 kubelet[2750]: E1013 00:06:21.525832 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.526004 kubelet[2750]: W1013 00:06:21.525839 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.526004 kubelet[2750]: E1013 00:06:21.525847 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.527149 kubelet[2750]: E1013 00:06:21.527125 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.527149 kubelet[2750]: W1013 00:06:21.527143 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.527269 kubelet[2750]: E1013 00:06:21.527157 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.527339 kubelet[2750]: E1013 00:06:21.527323 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.527339 kubelet[2750]: W1013 00:06:21.527334 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.527472 kubelet[2750]: E1013 00:06:21.527345 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.530024 kubelet[2750]: E1013 00:06:21.529993 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.530706 kubelet[2750]: W1013 00:06:21.530175 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.530706 kubelet[2750]: E1013 00:06:21.530610 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.531044 kubelet[2750]: I1013 00:06:21.530778 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1-kubelet-dir\") pod \"csi-node-driver-bc8bg\" (UID: \"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1\") " pod="calico-system/csi-node-driver-bc8bg"
Oct 13 00:06:21.533136 kubelet[2750]: E1013 00:06:21.532021 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.535302 kubelet[2750]: W1013 00:06:21.534939 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.535302 kubelet[2750]: E1013 00:06:21.534985 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.535302 kubelet[2750]: I1013 00:06:21.535023 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1-socket-dir\") pod \"csi-node-driver-bc8bg\" (UID: \"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1\") " pod="calico-system/csi-node-driver-bc8bg"
Oct 13 00:06:21.535842 kubelet[2750]: E1013 00:06:21.535657 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.535842 kubelet[2750]: W1013 00:06:21.535680 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.535842 kubelet[2750]: E1013 00:06:21.535704 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.535842 kubelet[2750]: I1013 00:06:21.535767 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1-registration-dir\") pod \"csi-node-driver-bc8bg\" (UID: \"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1\") " pod="calico-system/csi-node-driver-bc8bg"
Oct 13 00:06:21.536444 kubelet[2750]: E1013 00:06:21.536032 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.536444 kubelet[2750]: W1013 00:06:21.536043 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.536444 kubelet[2750]: E1013 00:06:21.536054 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.539524 kubelet[2750]: E1013 00:06:21.539229 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.539524 kubelet[2750]: W1013 00:06:21.539270 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.539524 kubelet[2750]: E1013 00:06:21.539290 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.540128 kubelet[2750]: E1013 00:06:21.539930 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.540128 kubelet[2750]: W1013 00:06:21.539950 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.540128 kubelet[2750]: E1013 00:06:21.539966 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.540128 kubelet[2750]: I1013 00:06:21.540002 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87bzh\" (UniqueName: \"kubernetes.io/projected/01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1-kube-api-access-87bzh\") pod \"csi-node-driver-bc8bg\" (UID: \"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1\") " pod="calico-system/csi-node-driver-bc8bg"
Oct 13 00:06:21.540318 kubelet[2750]: E1013 00:06:21.540165 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.540318 kubelet[2750]: W1013 00:06:21.540180 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.540318 kubelet[2750]: E1013 00:06:21.540194 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.540697 kubelet[2750]: E1013 00:06:21.540578 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.540697 kubelet[2750]: W1013 00:06:21.540596 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.540697 kubelet[2750]: E1013 00:06:21.540609 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.542299 kubelet[2750]: E1013 00:06:21.541986 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.542299 kubelet[2750]: W1013 00:06:21.542006 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.542299 kubelet[2750]: E1013 00:06:21.542023 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.542299 kubelet[2750]: E1013 00:06:21.542199 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.542299 kubelet[2750]: W1013 00:06:21.542207 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.542299 kubelet[2750]: E1013 00:06:21.542216 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:06:21.542299 kubelet[2750]: I1013 00:06:21.542242 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1-varrun\") pod \"csi-node-driver-bc8bg\" (UID: \"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1\") " pod="calico-system/csi-node-driver-bc8bg"
Oct 13 00:06:21.542510 kubelet[2750]: E1013 00:06:21.542380 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:06:21.542510 kubelet[2750]: W1013 00:06:21.542388 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:06:21.542510 kubelet[2750]: E1013 00:06:21.542397 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 13 00:06:21.542578 kubelet[2750]: E1013 00:06:21.542519 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.542578 kubelet[2750]: W1013 00:06:21.542526 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.542578 kubelet[2750]: E1013 00:06:21.542533 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.543511 kubelet[2750]: E1013 00:06:21.542946 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.543511 kubelet[2750]: W1013 00:06:21.542966 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.543511 kubelet[2750]: E1013 00:06:21.542981 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.543989 kubelet[2750]: E1013 00:06:21.543963 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.543989 kubelet[2750]: W1013 00:06:21.543984 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.543989 kubelet[2750]: E1013 00:06:21.544000 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.545036 kubelet[2750]: E1013 00:06:21.545011 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.545036 kubelet[2750]: W1013 00:06:21.545032 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.545121 kubelet[2750]: E1013 00:06:21.545049 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.583188 containerd[1544]: time="2025-10-13T00:06:21.582574578Z" level=info msg="connecting to shim 05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e" address="unix:///run/containerd/s/59c511fc0db3ea1bc9cb099d6052290685ed1e5035a565127f7b85801401b80b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:21.627131 systemd[1]: Started cri-containerd-05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e.scope - libcontainer container 05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e. 
Oct 13 00:06:21.645026 kubelet[2750]: E1013 00:06:21.644959 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.645026 kubelet[2750]: W1013 00:06:21.644986 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.645351 kubelet[2750]: E1013 00:06:21.645007 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.646275 kubelet[2750]: E1013 00:06:21.646207 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.646275 kubelet[2750]: W1013 00:06:21.646231 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.646510 kubelet[2750]: E1013 00:06:21.646438 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.646766 kubelet[2750]: E1013 00:06:21.646743 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.646825 kubelet[2750]: W1013 00:06:21.646766 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.646825 kubelet[2750]: E1013 00:06:21.646784 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.646971 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.647979 kubelet[2750]: W1013 00:06:21.646980 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.646989 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.647094 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.647979 kubelet[2750]: W1013 00:06:21.647100 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.647107 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.647369 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.647979 kubelet[2750]: W1013 00:06:21.647380 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.647392 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.647979 kubelet[2750]: E1013 00:06:21.647531 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.648427 kubelet[2750]: W1013 00:06:21.647538 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.647546 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.647703 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.648427 kubelet[2750]: W1013 00:06:21.647711 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.647718 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.647839 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.648427 kubelet[2750]: W1013 00:06:21.647849 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.647857 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.648427 kubelet[2750]: E1013 00:06:21.648025 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.648427 kubelet[2750]: W1013 00:06:21.648034 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.648618 kubelet[2750]: E1013 00:06:21.648043 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.648777 kubelet[2750]: E1013 00:06:21.648757 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.648777 kubelet[2750]: W1013 00:06:21.648774 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.648879 kubelet[2750]: E1013 00:06:21.648791 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.649432 kubelet[2750]: E1013 00:06:21.649413 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.649432 kubelet[2750]: W1013 00:06:21.649429 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.649562 kubelet[2750]: E1013 00:06:21.649443 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.649752 kubelet[2750]: E1013 00:06:21.649735 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.649752 kubelet[2750]: W1013 00:06:21.649748 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.649873 kubelet[2750]: E1013 00:06:21.649758 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.651068 kubelet[2750]: E1013 00:06:21.651039 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.651068 kubelet[2750]: W1013 00:06:21.651057 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.651068 kubelet[2750]: E1013 00:06:21.651071 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.651487 kubelet[2750]: E1013 00:06:21.651234 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.651487 kubelet[2750]: W1013 00:06:21.651242 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.651487 kubelet[2750]: E1013 00:06:21.651250 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.651827 kubelet[2750]: E1013 00:06:21.651803 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.651827 kubelet[2750]: W1013 00:06:21.651819 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.652228 kubelet[2750]: E1013 00:06:21.651836 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.652228 kubelet[2750]: E1013 00:06:21.652054 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.652228 kubelet[2750]: W1013 00:06:21.652064 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.652228 kubelet[2750]: E1013 00:06:21.652072 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.652542 kubelet[2750]: E1013 00:06:21.652523 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.652542 kubelet[2750]: W1013 00:06:21.652539 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.652617 kubelet[2750]: E1013 00:06:21.652551 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.652769 kubelet[2750]: E1013 00:06:21.652754 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.652769 kubelet[2750]: W1013 00:06:21.652766 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.652888 kubelet[2750]: E1013 00:06:21.652774 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.654073 kubelet[2750]: E1013 00:06:21.654041 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.654073 kubelet[2750]: W1013 00:06:21.654064 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.654173 kubelet[2750]: E1013 00:06:21.654079 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.654271 kubelet[2750]: E1013 00:06:21.654248 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.654318 kubelet[2750]: W1013 00:06:21.654285 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.654318 kubelet[2750]: E1013 00:06:21.654294 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.654432 kubelet[2750]: E1013 00:06:21.654418 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.654432 kubelet[2750]: W1013 00:06:21.654429 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.654488 kubelet[2750]: E1013 00:06:21.654436 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.654647 kubelet[2750]: E1013 00:06:21.654634 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.654647 kubelet[2750]: W1013 00:06:21.654646 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.654755 kubelet[2750]: E1013 00:06:21.654655 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.655305 kubelet[2750]: E1013 00:06:21.655027 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.655305 kubelet[2750]: W1013 00:06:21.655044 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.655305 kubelet[2750]: E1013 00:06:21.655058 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.655599 kubelet[2750]: E1013 00:06:21.655585 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.655669 kubelet[2750]: W1013 00:06:21.655656 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.655734 kubelet[2750]: E1013 00:06:21.655723 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:21.673069 kubelet[2750]: E1013 00:06:21.673033 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:21.673069 kubelet[2750]: W1013 00:06:21.673059 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:21.673069 kubelet[2750]: E1013 00:06:21.673082 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:21.762943 containerd[1544]: time="2025-10-13T00:06:21.762528403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smjrz,Uid:5a7b6cce-b47a-4747-ac7e-be822b2bcc74,Namespace:calico-system,Attempt:0,} returns sandbox id \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\"" Oct 13 00:06:22.998782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031205162.mount: Deactivated successfully. 
Oct 13 00:06:23.382053 kubelet[2750]: E1013 00:06:23.381434 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc8bg" podUID="01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1"
Oct 13 00:06:23.525494 containerd[1544]: time="2025-10-13T00:06:23.525410071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:23.527023 containerd[1544]: time="2025-10-13T00:06:23.526600952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Oct 13 00:06:23.527981 containerd[1544]: time="2025-10-13T00:06:23.527938563Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:23.531260 containerd[1544]: time="2025-10-13T00:06:23.531175903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:06:23.532142 containerd[1544]: time="2025-10-13T00:06:23.532114007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.02208376s"
Oct 13 00:06:23.532256 containerd[1544]: time="2025-10-13T00:06:23.532243176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Oct 13 00:06:23.533706 containerd[1544]: time="2025-10-13T00:06:23.533658272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Oct 13 00:06:23.552634 containerd[1544]: time="2025-10-13T00:06:23.552592200Z" level=info msg="CreateContainer within sandbox \"d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 13 00:06:23.572076 containerd[1544]: time="2025-10-13T00:06:23.571862711Z" level=info msg="Container 9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:06:23.586018 containerd[1544]: time="2025-10-13T00:06:23.585923667Z" level=info msg="CreateContainer within sandbox \"d1ef69361e1ee84d5a54df27ec2d16b8be28ca885ae1c9e3316693ea331f763d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33\""
Oct 13 00:06:23.587542 containerd[1544]: time="2025-10-13T00:06:23.587449451Z" level=info msg="StartContainer for \"9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33\""
Oct 13 00:06:23.592618 containerd[1544]: time="2025-10-13T00:06:23.591981640Z" level=info msg="connecting to shim 9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33" address="unix:///run/containerd/s/d5b9227018480242426f61ec8239553102213ae76cc43896bd9080b7c3463438" protocol=ttrpc version=3
Oct 13 00:06:23.619218 systemd[1]: Started cri-containerd-9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33.scope - libcontainer container 9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33.
Oct 13 00:06:23.671138 containerd[1544]: time="2025-10-13T00:06:23.670938490Z" level=info msg="StartContainer for \"9677ac3fc769edecf8c0ed69055a3ef4f410236127b5e81ce03edd6d70fa8d33\" returns successfully" Oct 13 00:06:24.512083 kubelet[2750]: I1013 00:06:24.511958 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-567c5b5c59-ccxx6" podStartSLOduration=2.488136111 podStartE2EDuration="4.511928511s" podCreationTimestamp="2025-10-13 00:06:20 +0000 UTC" firstStartedPulling="2025-10-13 00:06:21.509550611 +0000 UTC m=+23.261753527" lastFinishedPulling="2025-10-13 00:06:23.533343011 +0000 UTC m=+25.285545927" observedRunningTime="2025-10-13 00:06:24.510131113 +0000 UTC m=+26.262334069" watchObservedRunningTime="2025-10-13 00:06:24.511928511 +0000 UTC m=+26.264131467" Oct 13 00:06:24.547732 kubelet[2750]: E1013 00:06:24.547692 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:24.547732 kubelet[2750]: W1013 00:06:24.547719 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:24.548045 kubelet[2750]: E1013 00:06:24.547742 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:24.548045 kubelet[2750]: E1013 00:06:24.547971 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:24.548045 kubelet[2750]: W1013 00:06:24.547983 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:24.548045 kubelet[2750]: E1013 00:06:24.548044 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:06:24.548554 kubelet[2750]: E1013 00:06:24.548244 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:06:24.549062 kubelet[2750]: W1013 00:06:24.548255 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:06:24.549062 kubelet[2750]: E1013 00:06:24.549012 2750 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:06:24.942762 containerd[1544]: time="2025-10-13T00:06:24.942629981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:24.944642 containerd[1544]: time="2025-10-13T00:06:24.944580388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 00:06:24.947160 containerd[1544]: time="2025-10-13T00:06:24.947118995Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:24.952709 containerd[1544]: time="2025-10-13T00:06:24.952666878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:24.955314 containerd[1544]: time="2025-10-13T00:06:24.955162082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.421462167s" Oct 13 00:06:24.955314 containerd[1544]: time="2025-10-13T00:06:24.955206685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:06:24.960174 containerd[1544]: time="2025-10-13T00:06:24.960111006Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:06:24.970584 containerd[1544]: time="2025-10-13T00:06:24.970518048Z" level=info msg="Container f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:24.990922 containerd[1544]: time="2025-10-13T00:06:24.990843181Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\"" Oct 13 00:06:24.992501 containerd[1544]: time="2025-10-13T00:06:24.992088142Z" level=info msg="StartContainer for \"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\"" Oct 13 00:06:24.997492 containerd[1544]: time="2025-10-13T00:06:24.997418372Z" level=info msg="connecting to shim f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67" address="unix:///run/containerd/s/59c511fc0db3ea1bc9cb099d6052290685ed1e5035a565127f7b85801401b80b" protocol=ttrpc version=3 Oct 13 00:06:25.040289 systemd[1]: Started cri-containerd-f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67.scope - libcontainer container f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67. Oct 13 00:06:25.098292 containerd[1544]: time="2025-10-13T00:06:25.098242794Z" level=info msg="StartContainer for \"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\" returns successfully" Oct 13 00:06:25.120605 systemd[1]: cri-containerd-f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67.scope: Deactivated successfully. 
Oct 13 00:06:25.129519 containerd[1544]: time="2025-10-13T00:06:25.129316238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\" id:\"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\" pid:3420 exited_at:{seconds:1760313985 nanos:128655116}" Oct 13 00:06:25.129519 containerd[1544]: time="2025-10-13T00:06:25.129361721Z" level=info msg="received exit event container_id:\"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\" id:\"f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67\" pid:3420 exited_at:{seconds:1760313985 nanos:128655116}" Oct 13 00:06:25.166688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7c09e6e7c0ccc6a8d4ceb750235071cd7a844fbdbf5f0f9594e04a5db7d1d67-rootfs.mount: Deactivated successfully. Oct 13 00:06:25.382241 kubelet[2750]: E1013 00:06:25.381920 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc8bg" podUID="01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1" Oct 13 00:06:25.503641 kubelet[2750]: I1013 00:06:25.502212 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:06:25.508637 containerd[1544]: time="2025-10-13T00:06:25.508599176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:06:27.382212 kubelet[2750]: E1013 00:06:27.382163 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc8bg" podUID="01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1" Oct 13 00:06:28.128772 containerd[1544]: time="2025-10-13T00:06:28.128141235Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:28.132247 containerd[1544]: time="2025-10-13T00:06:28.131475465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:06:28.132783 containerd[1544]: time="2025-10-13T00:06:28.132704175Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:28.137735 containerd[1544]: time="2025-10-13T00:06:28.137644937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:28.139789 containerd[1544]: time="2025-10-13T00:06:28.139546566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.630896747s" Oct 13 00:06:28.139789 containerd[1544]: time="2025-10-13T00:06:28.139636771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:06:28.147290 containerd[1544]: time="2025-10-13T00:06:28.147157080Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:06:28.162191 containerd[1544]: time="2025-10-13T00:06:28.162135175Z" level=info msg="Container 4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a: CDI devices from CRI 
Config.CDIDevices: []" Oct 13 00:06:28.182275 containerd[1544]: time="2025-10-13T00:06:28.182098955Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\"" Oct 13 00:06:28.183285 containerd[1544]: time="2025-10-13T00:06:28.183149175Z" level=info msg="StartContainer for \"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\"" Oct 13 00:06:28.186188 containerd[1544]: time="2025-10-13T00:06:28.186134185Z" level=info msg="connecting to shim 4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a" address="unix:///run/containerd/s/59c511fc0db3ea1bc9cb099d6052290685ed1e5035a565127f7b85801401b80b" protocol=ttrpc version=3 Oct 13 00:06:28.219184 systemd[1]: Started cri-containerd-4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a.scope - libcontainer container 4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a. Oct 13 00:06:28.273603 containerd[1544]: time="2025-10-13T00:06:28.273480371Z" level=info msg="StartContainer for \"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\" returns successfully" Oct 13 00:06:28.817957 containerd[1544]: time="2025-10-13T00:06:28.817736236Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:06:28.820838 systemd[1]: cri-containerd-4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a.scope: Deactivated successfully. Oct 13 00:06:28.821390 systemd[1]: cri-containerd-4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a.scope: Consumed 523ms CPU time, 184.8M memory peak, 165.8M written to disk. 
Oct 13 00:06:28.826286 containerd[1544]: time="2025-10-13T00:06:28.826222321Z" level=info msg="received exit event container_id:\"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\" id:\"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\" pid:3477 exited_at:{seconds:1760313988 nanos:825347071}" Oct 13 00:06:28.826591 containerd[1544]: time="2025-10-13T00:06:28.826264403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\" id:\"4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a\" pid:3477 exited_at:{seconds:1760313988 nanos:825347071}" Oct 13 00:06:28.831178 kubelet[2750]: I1013 00:06:28.831064 2750 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 00:06:28.865710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4497660b4002531eef51ddaf778040513a960dd6549d34dc4316170d508fcf2a-rootfs.mount: Deactivated successfully. Oct 13 00:06:28.899739 systemd[1]: Created slice kubepods-burstable-pod4a3d6e61_9a33_4840_8142_125a1b0a5221.slice - libcontainer container kubepods-burstable-pod4a3d6e61_9a33_4840_8142_125a1b0a5221.slice. 
Oct 13 00:06:28.908712 kubelet[2750]: I1013 00:06:28.907631 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a3d6e61-9a33-4840-8142-125a1b0a5221-config-volume\") pod \"coredns-66bc5c9577-55db8\" (UID: \"4a3d6e61-9a33-4840-8142-125a1b0a5221\") " pod="kube-system/coredns-66bc5c9577-55db8" Oct 13 00:06:28.908712 kubelet[2750]: I1013 00:06:28.907693 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmzg\" (UniqueName: \"kubernetes.io/projected/4a3d6e61-9a33-4840-8142-125a1b0a5221-kube-api-access-ppmzg\") pod \"coredns-66bc5c9577-55db8\" (UID: \"4a3d6e61-9a33-4840-8142-125a1b0a5221\") " pod="kube-system/coredns-66bc5c9577-55db8" Oct 13 00:06:28.924018 systemd[1]: Created slice kubepods-burstable-podeb1f963b_8359_434b_98d9_a539c2e3c069.slice - libcontainer container kubepods-burstable-podeb1f963b_8359_434b_98d9_a539c2e3c069.slice. Oct 13 00:06:28.999949 systemd[1]: Created slice kubepods-besteffort-pod70f10094_62b2_4cbd_b7f0_bdb77a19b9b7.slice - libcontainer container kubepods-besteffort-pod70f10094_62b2_4cbd_b7f0_bdb77a19b9b7.slice. 
Oct 13 00:06:29.011402 kubelet[2750]: I1013 00:06:29.010386 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpfnh\" (UniqueName: \"kubernetes.io/projected/53e4f244-7eb8-4d22-a08f-aeae3cade04b-kube-api-access-lpfnh\") pod \"calico-apiserver-5789d66748-bhnjj\" (UID: \"53e4f244-7eb8-4d22-a08f-aeae3cade04b\") " pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" Oct 13 00:06:29.018882 kubelet[2750]: I1013 00:06:29.018098 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f10094-62b2-4cbd-b7f0-bdb77a19b9b7-tigera-ca-bundle\") pod \"calico-kube-controllers-7b6546f656-gvhgn\" (UID: \"70f10094-62b2-4cbd-b7f0-bdb77a19b9b7\") " pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" Oct 13 00:06:29.018882 kubelet[2750]: I1013 00:06:29.018167 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53e4f244-7eb8-4d22-a08f-aeae3cade04b-calico-apiserver-certs\") pod \"calico-apiserver-5789d66748-bhnjj\" (UID: \"53e4f244-7eb8-4d22-a08f-aeae3cade04b\") " pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" Oct 13 00:06:29.018882 kubelet[2750]: I1013 00:06:29.018219 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb1f963b-8359-434b-98d9-a539c2e3c069-config-volume\") pod \"coredns-66bc5c9577-9mcnk\" (UID: \"eb1f963b-8359-434b-98d9-a539c2e3c069\") " pod="kube-system/coredns-66bc5c9577-9mcnk" Oct 13 00:06:29.018882 kubelet[2750]: I1013 00:06:29.018289 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hb94\" (UniqueName: \"kubernetes.io/projected/70f10094-62b2-4cbd-b7f0-bdb77a19b9b7-kube-api-access-4hb94\") pod 
\"calico-kube-controllers-7b6546f656-gvhgn\" (UID: \"70f10094-62b2-4cbd-b7f0-bdb77a19b9b7\") " pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" Oct 13 00:06:29.018882 kubelet[2750]: I1013 00:06:29.018372 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5d5l\" (UniqueName: \"kubernetes.io/projected/eb1f963b-8359-434b-98d9-a539c2e3c069-kube-api-access-k5d5l\") pod \"coredns-66bc5c9577-9mcnk\" (UID: \"eb1f963b-8359-434b-98d9-a539c2e3c069\") " pod="kube-system/coredns-66bc5c9577-9mcnk" Oct 13 00:06:29.034929 systemd[1]: Created slice kubepods-besteffort-pod53e4f244_7eb8_4d22_a08f_aeae3cade04b.slice - libcontainer container kubepods-besteffort-pod53e4f244_7eb8_4d22_a08f_aeae3cade04b.slice. Oct 13 00:06:29.045927 systemd[1]: Created slice kubepods-besteffort-pode3c7842e_bcb8_45e2_b1cf_3717aeb735ad.slice - libcontainer container kubepods-besteffort-pode3c7842e_bcb8_45e2_b1cf_3717aeb735ad.slice. Oct 13 00:06:29.060609 systemd[1]: Created slice kubepods-besteffort-pode2434000_5a08_4018_ae79_d2c26ea7ecde.slice - libcontainer container kubepods-besteffort-pode2434000_5a08_4018_ae79_d2c26ea7ecde.slice. Oct 13 00:06:29.070141 systemd[1]: Created slice kubepods-besteffort-podc183318d_6e0e_4dec_a61a_7d306d33a93f.slice - libcontainer container kubepods-besteffort-podc183318d_6e0e_4dec_a61a_7d306d33a93f.slice. 
Oct 13 00:06:29.123145 kubelet[2750]: I1013 00:06:29.123043 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2m4n\" (UniqueName: \"kubernetes.io/projected/c183318d-6e0e-4dec-a61a-7d306d33a93f-kube-api-access-d2m4n\") pod \"calico-apiserver-5789d66748-lpcf6\" (UID: \"c183318d-6e0e-4dec-a61a-7d306d33a93f\") " pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" Oct 13 00:06:29.124872 kubelet[2750]: I1013 00:06:29.124100 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjt6\" (UniqueName: \"kubernetes.io/projected/e2434000-5a08-4018-ae79-d2c26ea7ecde-kube-api-access-ptjt6\") pod \"whisker-55ccb5dcd9-zw97z\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " pod="calico-system/whisker-55ccb5dcd9-zw97z" Oct 13 00:06:29.125125 kubelet[2750]: I1013 00:06:29.125100 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7t6c\" (UniqueName: \"kubernetes.io/projected/e3c7842e-bcb8-45e2-b1cf-3717aeb735ad-kube-api-access-q7t6c\") pod \"goldmane-854f97d977-lzkp5\" (UID: \"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad\") " pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.125218 kubelet[2750]: I1013 00:06:29.125207 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-backend-key-pair\") pod \"whisker-55ccb5dcd9-zw97z\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " pod="calico-system/whisker-55ccb5dcd9-zw97z" Oct 13 00:06:29.125362 kubelet[2750]: I1013 00:06:29.125341 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e3c7842e-bcb8-45e2-b1cf-3717aeb735ad-goldmane-key-pair\") pod \"goldmane-854f97d977-lzkp5\" 
(UID: \"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad\") " pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.125435 kubelet[2750]: I1013 00:06:29.125423 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c183318d-6e0e-4dec-a61a-7d306d33a93f-calico-apiserver-certs\") pod \"calico-apiserver-5789d66748-lpcf6\" (UID: \"c183318d-6e0e-4dec-a61a-7d306d33a93f\") " pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" Oct 13 00:06:29.125626 kubelet[2750]: I1013 00:06:29.125561 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c7842e-bcb8-45e2-b1cf-3717aeb735ad-config\") pod \"goldmane-854f97d977-lzkp5\" (UID: \"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad\") " pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.129398 kubelet[2750]: I1013 00:06:29.129368 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3c7842e-bcb8-45e2-b1cf-3717aeb735ad-goldmane-ca-bundle\") pod \"goldmane-854f97d977-lzkp5\" (UID: \"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad\") " pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.129524 kubelet[2750]: I1013 00:06:29.129509 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-ca-bundle\") pod \"whisker-55ccb5dcd9-zw97z\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " pod="calico-system/whisker-55ccb5dcd9-zw97z" Oct 13 00:06:29.211497 containerd[1544]: time="2025-10-13T00:06:29.211453651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55db8,Uid:4a3d6e61-9a33-4840-8142-125a1b0a5221,Namespace:kube-system,Attempt:0,}" Oct 13 00:06:29.260162 
containerd[1544]: time="2025-10-13T00:06:29.259928851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mcnk,Uid:eb1f963b-8359-434b-98d9-a539c2e3c069,Namespace:kube-system,Attempt:0,}" Oct 13 00:06:29.333228 containerd[1544]: time="2025-10-13T00:06:29.332093640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6546f656-gvhgn,Uid:70f10094-62b2-4cbd-b7f0-bdb77a19b9b7,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:29.352916 containerd[1544]: time="2025-10-13T00:06:29.352833587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-bhnjj,Uid:53e4f244-7eb8-4d22-a08f-aeae3cade04b,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:06:29.360111 containerd[1544]: time="2025-10-13T00:06:29.360049866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-lzkp5,Uid:e3c7842e-bcb8-45e2-b1cf-3717aeb735ad,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:29.385684 containerd[1544]: time="2025-10-13T00:06:29.385628160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-lpcf6,Uid:c183318d-6e0e-4dec-a61a-7d306d33a93f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:06:29.388240 containerd[1544]: time="2025-10-13T00:06:29.388167540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55ccb5dcd9-zw97z,Uid:e2434000-5a08-4018-ae79-d2c26ea7ecde,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:29.391511 systemd[1]: Created slice kubepods-besteffort-pod01aa0e53_b9f7_4e6c_b5ea_bde03ed0aec1.slice - libcontainer container kubepods-besteffort-pod01aa0e53_b9f7_4e6c_b5ea_bde03ed0aec1.slice. 
Oct 13 00:06:29.396566 containerd[1544]: time="2025-10-13T00:06:29.396530602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc8bg,Uid:01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:29.548931 containerd[1544]: time="2025-10-13T00:06:29.547758363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:06:29.603456 containerd[1544]: time="2025-10-13T00:06:29.603082341Z" level=error msg="Failed to destroy network for sandbox \"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.614748 containerd[1544]: time="2025-10-13T00:06:29.614633100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-lzkp5,Uid:e3c7842e-bcb8-45e2-b1cf-3717aeb735ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.615054 kubelet[2750]: E1013 00:06:29.614996 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.615130 kubelet[2750]: E1013 00:06:29.615081 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.615130 kubelet[2750]: E1013 00:06:29.615101 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-lzkp5" Oct 13 00:06:29.615197 kubelet[2750]: E1013 00:06:29.615156 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-lzkp5_calico-system(e3c7842e-bcb8-45e2-b1cf-3717aeb735ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-lzkp5_calico-system(e3c7842e-bcb8-45e2-b1cf-3717aeb735ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"088f201e2353a5668e3de28f53676103a5cdcb3c5d77f74b5ee92cb0dcbb26b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-lzkp5" podUID="e3c7842e-bcb8-45e2-b1cf-3717aeb735ad" Oct 13 00:06:29.644936 containerd[1544]: time="2025-10-13T00:06:29.644853930Z" level=error msg="Failed to destroy network for sandbox \"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 
00:06:29.646097 containerd[1544]: time="2025-10-13T00:06:29.646038876Z" level=error msg="Failed to destroy network for sandbox \"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.647948 containerd[1544]: time="2025-10-13T00:06:29.647873097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mcnk,Uid:eb1f963b-8359-434b-98d9-a539c2e3c069,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.649258 containerd[1544]: time="2025-10-13T00:06:29.649012240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-lpcf6,Uid:c183318d-6e0e-4dec-a61a-7d306d33a93f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.649455 kubelet[2750]: E1013 00:06:29.649245 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 
00:06:29.649455 kubelet[2750]: E1013 00:06:29.649300 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mcnk" Oct 13 00:06:29.649455 kubelet[2750]: E1013 00:06:29.649318 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9mcnk" Oct 13 00:06:29.649558 kubelet[2750]: E1013 00:06:29.649381 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9mcnk_kube-system(eb1f963b-8359-434b-98d9-a539c2e3c069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9mcnk_kube-system(eb1f963b-8359-434b-98d9-a539c2e3c069)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d709258fbc92ba947ff393158c478bf38aef4862db171d387253e1e95ff45be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9mcnk" podUID="eb1f963b-8359-434b-98d9-a539c2e3c069" Oct 13 00:06:29.652338 containerd[1544]: time="2025-10-13T00:06:29.651780033Z" level=error msg="Failed to destroy network for sandbox \"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.652439 kubelet[2750]: E1013 00:06:29.651698 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.652439 kubelet[2750]: E1013 00:06:29.651873 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" Oct 13 00:06:29.652439 kubelet[2750]: E1013 00:06:29.652016 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" Oct 13 00:06:29.652540 kubelet[2750]: E1013 00:06:29.652185 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5789d66748-lpcf6_calico-apiserver(c183318d-6e0e-4dec-a61a-7d306d33a93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5789d66748-lpcf6_calico-apiserver(c183318d-6e0e-4dec-a61a-7d306d33a93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b54c0ab5242031da318b3e91845d08f25776e7fe8738a1df1b0261481d07807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" podUID="c183318d-6e0e-4dec-a61a-7d306d33a93f" Oct 13 00:06:29.657410 containerd[1544]: time="2025-10-13T00:06:29.657342781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55ccb5dcd9-zw97z,Uid:e2434000-5a08-4018-ae79-d2c26ea7ecde,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.657835 kubelet[2750]: E1013 00:06:29.657627 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.657835 kubelet[2750]: E1013 00:06:29.657831 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-55ccb5dcd9-zw97z" Oct 13 00:06:29.658073 kubelet[2750]: E1013 00:06:29.657981 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55ccb5dcd9-zw97z" Oct 13 00:06:29.658278 kubelet[2750]: E1013 00:06:29.658153 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55ccb5dcd9-zw97z_calico-system(e2434000-5a08-4018-ae79-d2c26ea7ecde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55ccb5dcd9-zw97z_calico-system(e2434000-5a08-4018-ae79-d2c26ea7ecde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"298b355a839a203e4a3dd6e0670a28ab330ad65535ee1e196d5b7f113cbe1161\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55ccb5dcd9-zw97z" podUID="e2434000-5a08-4018-ae79-d2c26ea7ecde" Oct 13 00:06:29.681627 containerd[1544]: time="2025-10-13T00:06:29.681481235Z" level=error msg="Failed to destroy network for sandbox \"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.683240 containerd[1544]: time="2025-10-13T00:06:29.683187970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc8bg,Uid:01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.683758 kubelet[2750]: E1013 00:06:29.683718 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.683941 kubelet[2750]: E1013 00:06:29.683776 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc8bg" Oct 13 00:06:29.683941 kubelet[2750]: E1013 00:06:29.683796 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc8bg" Oct 13 00:06:29.683941 kubelet[2750]: E1013 00:06:29.683849 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc8bg_calico-system(01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-bc8bg_calico-system(01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5c6fb04b61750bf981e5a00fc9714e7a3d619da0d4ffdc4df1917a9a07e9776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc8bg" podUID="01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1" Oct 13 00:06:29.687624 containerd[1544]: time="2025-10-13T00:06:29.687572972Z" level=error msg="Failed to destroy network for sandbox \"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.689282 containerd[1544]: time="2025-10-13T00:06:29.689116377Z" level=error msg="Failed to destroy network for sandbox \"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.691116 containerd[1544]: time="2025-10-13T00:06:29.691071405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-bhnjj,Uid:53e4f244-7eb8-4d22-a08f-aeae3cade04b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.692023 kubelet[2750]: E1013 00:06:29.691516 2750 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.692023 kubelet[2750]: E1013 00:06:29.691572 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" Oct 13 00:06:29.692023 kubelet[2750]: E1013 00:06:29.691594 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" Oct 13 00:06:29.692244 kubelet[2750]: E1013 00:06:29.691648 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5789d66748-bhnjj_calico-apiserver(53e4f244-7eb8-4d22-a08f-aeae3cade04b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5789d66748-bhnjj_calico-apiserver(53e4f244-7eb8-4d22-a08f-aeae3cade04b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa1709f01bce31cd39c51c7a0fb1e934ccb879098cc82560c1b412a8ebeee783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" podUID="53e4f244-7eb8-4d22-a08f-aeae3cade04b" Oct 13 00:06:29.692311 containerd[1544]: time="2025-10-13T00:06:29.692197868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6546f656-gvhgn,Uid:70f10094-62b2-4cbd-b7f0-bdb77a19b9b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.692823 kubelet[2750]: E1013 00:06:29.692775 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.693075 kubelet[2750]: E1013 00:06:29.692836 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" Oct 13 00:06:29.693075 kubelet[2750]: E1013 00:06:29.692868 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" Oct 13 00:06:29.693075 kubelet[2750]: E1013 00:06:29.692999 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b6546f656-gvhgn_calico-system(70f10094-62b2-4cbd-b7f0-bdb77a19b9b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b6546f656-gvhgn_calico-system(70f10094-62b2-4cbd-b7f0-bdb77a19b9b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"574df0e0befa4b5c92579af5b04ade38c79bd38cdca6eba4856940f9d59868d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" podUID="70f10094-62b2-4cbd-b7f0-bdb77a19b9b7" Oct 13 00:06:29.693427 containerd[1544]: time="2025-10-13T00:06:29.693152520Z" level=error msg="Failed to destroy network for sandbox \"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.694253 containerd[1544]: time="2025-10-13T00:06:29.694205539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55db8,Uid:4a3d6e61-9a33-4840-8142-125a1b0a5221,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.694481 kubelet[2750]: E1013 00:06:29.694398 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:06:29.694481 kubelet[2750]: E1013 00:06:29.694442 2750 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-55db8" Oct 13 00:06:29.694481 kubelet[2750]: E1013 00:06:29.694467 2750 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-55db8" Oct 13 00:06:29.694730 kubelet[2750]: E1013 00:06:29.694511 2750 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-55db8_kube-system(4a3d6e61-9a33-4840-8142-125a1b0a5221)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-55db8_kube-system(4a3d6e61-9a33-4840-8142-125a1b0a5221)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"e6681145b174775b236b1b6c619f2ab9bf95f9cd4f0e74132ff7fb3e134ebc73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-55db8" podUID="4a3d6e61-9a33-4840-8142-125a1b0a5221" Oct 13 00:06:30.169723 systemd[1]: run-netns-cni\x2d91d5c5f6\x2d6b62\x2da122\x2db530\x2dd4e0754cb7fb.mount: Deactivated successfully. Oct 13 00:06:30.986286 kubelet[2750]: I1013 00:06:30.985975 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:06:33.972638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677245710.mount: Deactivated successfully. Oct 13 00:06:33.996040 containerd[1544]: time="2025-10-13T00:06:33.995955924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:33.996989 containerd[1544]: time="2025-10-13T00:06:33.996957293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:06:33.997764 containerd[1544]: time="2025-10-13T00:06:33.997646767Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:34.000180 containerd[1544]: time="2025-10-13T00:06:34.000116768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:34.001337 containerd[1544]: time="2025-10-13T00:06:34.000713037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.452607976s" Oct 13 00:06:34.001337 containerd[1544]: time="2025-10-13T00:06:34.000751839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:06:34.025956 containerd[1544]: time="2025-10-13T00:06:34.025870081Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:06:34.038053 containerd[1544]: time="2025-10-13T00:06:34.034395729Z" level=info msg="Container 2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:34.053764 containerd[1544]: time="2025-10-13T00:06:34.053624850Z" level=info msg="CreateContainer within sandbox \"05af078a76b4bb1eb1d34438df24a25283f9a0e5e9ddfe67ff49933bbd6b233e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\"" Oct 13 00:06:34.055703 containerd[1544]: time="2025-10-13T00:06:34.054734623Z" level=info msg="StartContainer for \"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\"" Oct 13 00:06:34.059041 containerd[1544]: time="2025-10-13T00:06:34.058966065Z" level=info msg="connecting to shim 2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285" address="unix:///run/containerd/s/59c511fc0db3ea1bc9cb099d6052290685ed1e5035a565127f7b85801401b80b" protocol=ttrpc version=3 Oct 13 00:06:34.105235 systemd[1]: Started cri-containerd-2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285.scope - libcontainer container 2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285. 
Oct 13 00:06:34.152160 containerd[1544]: time="2025-10-13T00:06:34.152113202Z" level=info msg="StartContainer for \"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" returns successfully" Oct 13 00:06:34.285148 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 00:06:34.285296 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 00:06:34.565695 kubelet[2750]: I1013 00:06:34.565235 2750 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-ca-bundle\") pod \"e2434000-5a08-4018-ae79-d2c26ea7ecde\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " Oct 13 00:06:34.565695 kubelet[2750]: I1013 00:06:34.565297 2750 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-backend-key-pair\") pod \"e2434000-5a08-4018-ae79-d2c26ea7ecde\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " Oct 13 00:06:34.565695 kubelet[2750]: I1013 00:06:34.565328 2750 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptjt6\" (UniqueName: \"kubernetes.io/projected/e2434000-5a08-4018-ae79-d2c26ea7ecde-kube-api-access-ptjt6\") pod \"e2434000-5a08-4018-ae79-d2c26ea7ecde\" (UID: \"e2434000-5a08-4018-ae79-d2c26ea7ecde\") " Oct 13 00:06:34.575873 kubelet[2750]: I1013 00:06:34.575703 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e2434000-5a08-4018-ae79-d2c26ea7ecde" (UID: "e2434000-5a08-4018-ae79-d2c26ea7ecde"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:06:34.582063 kubelet[2750]: I1013 00:06:34.581987 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2434000-5a08-4018-ae79-d2c26ea7ecde-kube-api-access-ptjt6" (OuterVolumeSpecName: "kube-api-access-ptjt6") pod "e2434000-5a08-4018-ae79-d2c26ea7ecde" (UID: "e2434000-5a08-4018-ae79-d2c26ea7ecde"). InnerVolumeSpecName "kube-api-access-ptjt6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:06:34.587584 kubelet[2750]: I1013 00:06:34.587407 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e2434000-5a08-4018-ae79-d2c26ea7ecde" (UID: "e2434000-5a08-4018-ae79-d2c26ea7ecde"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:06:34.605153 kubelet[2750]: I1013 00:06:34.605065 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-smjrz" podStartSLOduration=1.368537949 podStartE2EDuration="13.605046956s" podCreationTimestamp="2025-10-13 00:06:21 +0000 UTC" firstStartedPulling="2025-10-13 00:06:21.765137675 +0000 UTC m=+23.517340591" lastFinishedPulling="2025-10-13 00:06:34.001646682 +0000 UTC m=+35.753849598" observedRunningTime="2025-10-13 00:06:34.604207716 +0000 UTC m=+36.356410632" watchObservedRunningTime="2025-10-13 00:06:34.605046956 +0000 UTC m=+36.357249872" Oct 13 00:06:34.666273 kubelet[2750]: I1013 00:06:34.666220 2750 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-ca-bundle\") on node \"ci-4459-1-0-c-ccbbacf556\" DevicePath \"\"" Oct 13 00:06:34.666273 kubelet[2750]: I1013 00:06:34.666255 2750 reconciler_common.go:299] "Volume detached for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2434000-5a08-4018-ae79-d2c26ea7ecde-whisker-backend-key-pair\") on node \"ci-4459-1-0-c-ccbbacf556\" DevicePath \"\"" Oct 13 00:06:34.666273 kubelet[2750]: I1013 00:06:34.666265 2750 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptjt6\" (UniqueName: \"kubernetes.io/projected/e2434000-5a08-4018-ae79-d2c26ea7ecde-kube-api-access-ptjt6\") on node \"ci-4459-1-0-c-ccbbacf556\" DevicePath \"\"" Oct 13 00:06:34.741689 containerd[1544]: time="2025-10-13T00:06:34.741640892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"3100d9fd429a8c2e55f1d73f9f973a76d97850facf99b71f9f155457020f5743\" pid:3803 exit_status:1 exited_at:{seconds:1760313994 nanos:740981500}" Oct 13 00:06:34.890601 systemd[1]: Removed slice kubepods-besteffort-pode2434000_5a08_4018_ae79_d2c26ea7ecde.slice - libcontainer container kubepods-besteffort-pode2434000_5a08_4018_ae79_d2c26ea7ecde.slice. Oct 13 00:06:34.976884 systemd[1]: var-lib-kubelet-pods-e2434000\x2d5a08\x2d4018\x2dae79\x2dd2c26ea7ecde-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptjt6.mount: Deactivated successfully. Oct 13 00:06:34.977323 systemd[1]: var-lib-kubelet-pods-e2434000\x2d5a08\x2d4018\x2dae79\x2dd2c26ea7ecde-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 00:06:34.997972 systemd[1]: Created slice kubepods-besteffort-pod6e87420c_c3bb_4512_8b1f_1628b816cb72.slice - libcontainer container kubepods-besteffort-pod6e87420c_c3bb_4512_8b1f_1628b816cb72.slice. 
Oct 13 00:06:35.070568 kubelet[2750]: I1013 00:06:35.070484 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e87420c-c3bb-4512-8b1f-1628b816cb72-whisker-backend-key-pair\") pod \"whisker-65ffd8b4dd-r68zf\" (UID: \"6e87420c-c3bb-4512-8b1f-1628b816cb72\") " pod="calico-system/whisker-65ffd8b4dd-r68zf" Oct 13 00:06:35.070568 kubelet[2750]: I1013 00:06:35.070532 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ttzk\" (UniqueName: \"kubernetes.io/projected/6e87420c-c3bb-4512-8b1f-1628b816cb72-kube-api-access-7ttzk\") pod \"whisker-65ffd8b4dd-r68zf\" (UID: \"6e87420c-c3bb-4512-8b1f-1628b816cb72\") " pod="calico-system/whisker-65ffd8b4dd-r68zf" Oct 13 00:06:35.070730 kubelet[2750]: I1013 00:06:35.070621 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e87420c-c3bb-4512-8b1f-1628b816cb72-whisker-ca-bundle\") pod \"whisker-65ffd8b4dd-r68zf\" (UID: \"6e87420c-c3bb-4512-8b1f-1628b816cb72\") " pod="calico-system/whisker-65ffd8b4dd-r68zf" Oct 13 00:06:35.306491 containerd[1544]: time="2025-10-13T00:06:35.306389703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffd8b4dd-r68zf,Uid:6e87420c-c3bb-4512-8b1f-1628b816cb72,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:35.495892 systemd-networkd[1413]: calia109c2606ad: Link UP Oct 13 00:06:35.496446 systemd-networkd[1413]: calia109c2606ad: Gained carrier Oct 13 00:06:35.517796 containerd[1544]: 2025-10-13 00:06:35.338 [INFO][3827] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:06:35.517796 containerd[1544]: 2025-10-13 00:06:35.375 [INFO][3827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0 whisker-65ffd8b4dd- calico-system 6e87420c-c3bb-4512-8b1f-1628b816cb72 911 0 2025-10-13 00:06:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65ffd8b4dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 whisker-65ffd8b4dd-r68zf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia109c2606ad [] [] }} ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-" Oct 13 00:06:35.517796 containerd[1544]: 2025-10-13 00:06:35.376 [INFO][3827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.517796 containerd[1544]: 2025-10-13 00:06:35.427 [INFO][3840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" HandleID="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Workload="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.427 [INFO][3840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" HandleID="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Workload="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"whisker-65ffd8b4dd-r68zf", "timestamp":"2025-10-13 00:06:35.427505311 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.427 [INFO][3840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.427 [INFO][3840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.428 [INFO][3840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.442 [INFO][3840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.449 [INFO][3840] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.455 [INFO][3840] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.457 [INFO][3840] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518166 containerd[1544]: 2025-10-13 00:06:35.461 [INFO][3840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.461 [INFO][3840] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 
handle="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.466 [INFO][3840] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85 Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.476 [INFO][3840] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.483 [INFO][3840] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.65/26] block=192.168.88.64/26 handle="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.484 [INFO][3840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.65/26] handle="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.484 [INFO][3840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:06:35.518352 containerd[1544]: 2025-10-13 00:06:35.484 [INFO][3840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.65/26] IPv6=[] ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" HandleID="k8s-pod-network.0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Workload="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.518485 containerd[1544]: 2025-10-13 00:06:35.487 [INFO][3827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0", GenerateName:"whisker-65ffd8b4dd-", Namespace:"calico-system", SelfLink:"", UID:"6e87420c-c3bb-4512-8b1f-1628b816cb72", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ffd8b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"whisker-65ffd8b4dd-r68zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calia109c2606ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:35.518485 containerd[1544]: 2025-10-13 00:06:35.488 [INFO][3827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.65/32] ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.518551 containerd[1544]: 2025-10-13 00:06:35.488 [INFO][3827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia109c2606ad ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.518551 containerd[1544]: 2025-10-13 00:06:35.497 [INFO][3827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.518592 containerd[1544]: 2025-10-13 00:06:35.498 [INFO][3827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0", GenerateName:"whisker-65ffd8b4dd-", Namespace:"calico-system", SelfLink:"", 
UID:"6e87420c-c3bb-4512-8b1f-1628b816cb72", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ffd8b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85", Pod:"whisker-65ffd8b4dd-r68zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia109c2606ad", MAC:"e2:1a:84:9a:ac:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:35.518638 containerd[1544]: 2025-10-13 00:06:35.514 [INFO][3827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" Namespace="calico-system" Pod="whisker-65ffd8b4dd-r68zf" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-whisker--65ffd8b4dd--r68zf-eth0" Oct 13 00:06:35.575752 containerd[1544]: time="2025-10-13T00:06:35.575565535Z" level=info msg="connecting to shim 0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85" address="unix:///run/containerd/s/1c950bc4ecc3f2a5d145179632ecc3eb00a50b853f551c7b5cfb21c4cd56b63a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:35.609554 systemd[1]: Started 
cri-containerd-0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85.scope - libcontainer container 0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85. Oct 13 00:06:35.697561 containerd[1544]: time="2025-10-13T00:06:35.697437218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"9982d1fd5ffae4c8938b8661ce833d8515d189c627bd18b3fe4cbdc4efc2be23\" pid:3900 exit_status:1 exited_at:{seconds:1760313995 nanos:696779388}" Oct 13 00:06:35.701228 containerd[1544]: time="2025-10-13T00:06:35.701164912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffd8b4dd-r68zf,Uid:6e87420c-c3bb-4512-8b1f-1628b816cb72,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85\"" Oct 13 00:06:35.705634 containerd[1544]: time="2025-10-13T00:06:35.704993251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 00:06:36.397563 kubelet[2750]: I1013 00:06:36.397434 2750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2434000-5a08-4018-ae79-d2c26ea7ecde" path="/var/lib/kubelet/pods/e2434000-5a08-4018-ae79-d2c26ea7ecde/volumes" Oct 13 00:06:36.424331 systemd-networkd[1413]: vxlan.calico: Link UP Oct 13 00:06:36.424341 systemd-networkd[1413]: vxlan.calico: Gained carrier Oct 13 00:06:36.708543 systemd-networkd[1413]: calia109c2606ad: Gained IPv6LL Oct 13 00:06:36.761393 containerd[1544]: time="2025-10-13T00:06:36.761353961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"317673098029aa84b2436b0b9526ca9146017078f0bbe14ba7b7b27dc56deafb\" pid:4100 exit_status:1 exited_at:{seconds:1760313996 nanos:760432879}" Oct 13 00:06:37.317436 containerd[1544]: time="2025-10-13T00:06:37.317379394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:37.319057 containerd[1544]: time="2025-10-13T00:06:37.318777616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 00:06:37.320007 containerd[1544]: time="2025-10-13T00:06:37.319971989Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:37.323772 containerd[1544]: time="2025-10-13T00:06:37.323707555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:37.325131 containerd[1544]: time="2025-10-13T00:06:37.324780123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.618813906s" Oct 13 00:06:37.325131 containerd[1544]: time="2025-10-13T00:06:37.324821164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 00:06:37.330787 containerd[1544]: time="2025-10-13T00:06:37.330736667Z" level=info msg="CreateContainer within sandbox \"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 00:06:37.345511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974115954.mount: Deactivated successfully. 
Oct 13 00:06:37.351734 containerd[1544]: time="2025-10-13T00:06:37.345685851Z" level=info msg="Container cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:37.368410 containerd[1544]: time="2025-10-13T00:06:37.368307216Z" level=info msg="CreateContainer within sandbox \"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765\"" Oct 13 00:06:37.369755 containerd[1544]: time="2025-10-13T00:06:37.369649715Z" level=info msg="StartContainer for \"cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765\"" Oct 13 00:06:37.371640 containerd[1544]: time="2025-10-13T00:06:37.371546240Z" level=info msg="connecting to shim cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765" address="unix:///run/containerd/s/1c950bc4ecc3f2a5d145179632ecc3eb00a50b853f551c7b5cfb21c4cd56b63a" protocol=ttrpc version=3 Oct 13 00:06:37.393314 systemd[1]: Started cri-containerd-cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765.scope - libcontainer container cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765. Oct 13 00:06:37.448527 containerd[1544]: time="2025-10-13T00:06:37.448480097Z" level=info msg="StartContainer for \"cfdee230872f5e0c4d33b368e6c339ccb7707b3ffe0013042bd59135b0436765\" returns successfully" Oct 13 00:06:37.452666 containerd[1544]: time="2025-10-13T00:06:37.452536357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 00:06:38.373360 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL Oct 13 00:06:39.438157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945730023.mount: Deactivated successfully. 
Oct 13 00:06:39.461285 containerd[1544]: time="2025-10-13T00:06:39.461181913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:39.462653 containerd[1544]: time="2025-10-13T00:06:39.462427686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 00:06:39.463735 containerd[1544]: time="2025-10-13T00:06:39.463694419Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:39.466399 containerd[1544]: time="2025-10-13T00:06:39.466319251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:39.467307 containerd[1544]: time="2025-10-13T00:06:39.467248610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.01464673s" Oct 13 00:06:39.467307 containerd[1544]: time="2025-10-13T00:06:39.467288692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 00:06:39.474298 containerd[1544]: time="2025-10-13T00:06:39.474146783Z" level=info msg="CreateContainer within sandbox \"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 00:06:39.493186 
containerd[1544]: time="2025-10-13T00:06:39.493129509Z" level=info msg="Container 3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:39.505333 containerd[1544]: time="2025-10-13T00:06:39.505251504Z" level=info msg="CreateContainer within sandbox \"0c0028b645f870b33c71c59949afd7542230114341feed87135183b7a4d15d85\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29\"" Oct 13 00:06:39.507989 containerd[1544]: time="2025-10-13T00:06:39.507120344Z" level=info msg="StartContainer for \"3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29\"" Oct 13 00:06:39.509448 containerd[1544]: time="2025-10-13T00:06:39.509403321Z" level=info msg="connecting to shim 3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29" address="unix:///run/containerd/s/1c950bc4ecc3f2a5d145179632ecc3eb00a50b853f551c7b5cfb21c4cd56b63a" protocol=ttrpc version=3 Oct 13 00:06:39.537168 systemd[1]: Started cri-containerd-3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29.scope - libcontainer container 3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29. 
Oct 13 00:06:39.588118 containerd[1544]: time="2025-10-13T00:06:39.588073622Z" level=info msg="StartContainer for \"3521262eeefcbe481a45e21927d1f61cb6c6f7efe93d40dc548d26030b446a29\" returns successfully" Oct 13 00:06:39.626453 kubelet[2750]: I1013 00:06:39.626146 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65ffd8b4dd-r68zf" podStartSLOduration=1.8621633229999999 podStartE2EDuration="5.626126958s" podCreationTimestamp="2025-10-13 00:06:34 +0000 UTC" firstStartedPulling="2025-10-13 00:06:35.704429464 +0000 UTC m=+37.456632380" lastFinishedPulling="2025-10-13 00:06:39.468393139 +0000 UTC m=+41.220596015" observedRunningTime="2025-10-13 00:06:39.625928549 +0000 UTC m=+41.378131505" watchObservedRunningTime="2025-10-13 00:06:39.626126958 +0000 UTC m=+41.378329874" Oct 13 00:06:41.384582 containerd[1544]: time="2025-10-13T00:06:41.384520575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mcnk,Uid:eb1f963b-8359-434b-98d9-a539c2e3c069,Namespace:kube-system,Attempt:0,}" Oct 13 00:06:41.386731 containerd[1544]: time="2025-10-13T00:06:41.386677823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc8bg,Uid:01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:41.391469 containerd[1544]: time="2025-10-13T00:06:41.390607623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-bhnjj,Uid:53e4f244-7eb8-4d22-a08f-aeae3cade04b,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:06:41.605019 systemd-networkd[1413]: calie2f53703a7c: Link UP Oct 13 00:06:41.608573 systemd-networkd[1413]: calie2f53703a7c: Gained carrier Oct 13 00:06:41.636871 containerd[1544]: 2025-10-13 00:06:41.458 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0 coredns-66bc5c9577- kube-system 
eb1f963b-8359-434b-98d9-a539c2e3c069 840 0 2025-10-13 00:06:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 coredns-66bc5c9577-9mcnk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2f53703a7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-" Oct 13 00:06:41.636871 containerd[1544]: 2025-10-13 00:06:41.458 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.636871 containerd[1544]: 2025-10-13 00:06:41.526 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" HandleID="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.526 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" HandleID="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321870), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4459-1-0-c-ccbbacf556", "pod":"coredns-66bc5c9577-9mcnk", "timestamp":"2025-10-13 00:06:41.526093425 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.526 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.526 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.526 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.542 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.555 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.563 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.566 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637161 containerd[1544]: 2025-10-13 00:06:41.570 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.570 [INFO][4258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 
handle="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.572 [INFO][4258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3 Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.581 [INFO][4258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.592 [INFO][4258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.66/26] block=192.168.88.64/26 handle="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.592 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.66/26] handle="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.592 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:06:41.637368 containerd[1544]: 2025-10-13 00:06:41.592 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.66/26] IPv6=[] ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" HandleID="k8s-pod-network.2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.637549 containerd[1544]: 2025-10-13 00:06:41.598 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eb1f963b-8359-434b-98d9-a539c2e3c069", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"coredns-66bc5c9577-9mcnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie2f53703a7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.637549 containerd[1544]: 2025-10-13 00:06:41.598 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.66/32] ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.637549 containerd[1544]: 2025-10-13 00:06:41.598 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2f53703a7c ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.637549 containerd[1544]: 2025-10-13 00:06:41.608 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 
00:06:41.637549 containerd[1544]: 2025-10-13 00:06:41.610 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eb1f963b-8359-434b-98d9-a539c2e3c069", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3", Pod:"coredns-66bc5c9577-9mcnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2f53703a7c", MAC:"e6:30:0c:22:d5:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.637720 containerd[1544]: 2025-10-13 00:06:41.628 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" Namespace="kube-system" Pod="coredns-66bc5c9577-9mcnk" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--9mcnk-eth0" Oct 13 00:06:41.686603 containerd[1544]: time="2025-10-13T00:06:41.686417440Z" level=info msg="connecting to shim 2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3" address="unix:///run/containerd/s/ecbb65f534a0e128ac34d27df487d6121a57163a59957bc5f88d8ad697c34bf4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:41.724334 systemd-networkd[1413]: cali8ef0dcdd95a: Link UP Oct 13 00:06:41.726140 systemd-networkd[1413]: cali8ef0dcdd95a: Gained carrier Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.484 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0 calico-apiserver-5789d66748- calico-apiserver 53e4f244-7eb8-4d22-a08f-aeae3cade04b 842 0 2025-10-13 00:06:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5789d66748 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 calico-apiserver-5789d66748-bhnjj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ef0dcdd95a [] [] }} ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.484 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.552 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" HandleID="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.552 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" HandleID="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"calico-apiserver-5789d66748-bhnjj", "timestamp":"2025-10-13 00:06:41.552377016 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.552 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.592 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.593 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.643 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.656 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.672 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.675 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.680 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.680 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.683 [INFO][4270] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97 Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.691 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.702 [INFO][4270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.67/26] block=192.168.88.64/26 handle="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.703 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.67/26] handle="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.703 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:06:41.749519 containerd[1544]: 2025-10-13 00:06:41.703 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.67/26] IPv6=[] ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" HandleID="k8s-pod-network.ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.715 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0", GenerateName:"calico-apiserver-5789d66748-", Namespace:"calico-apiserver", SelfLink:"", UID:"53e4f244-7eb8-4d22-a08f-aeae3cade04b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789d66748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"calico-apiserver-5789d66748-bhnjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ef0dcdd95a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.716 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.67/32] ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.716 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ef0dcdd95a ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.725 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.726 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0", GenerateName:"calico-apiserver-5789d66748-", Namespace:"calico-apiserver", SelfLink:"", UID:"53e4f244-7eb8-4d22-a08f-aeae3cade04b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789d66748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97", Pod:"calico-apiserver-5789d66748-bhnjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ef0dcdd95a", MAC:"3a:17:dc:44:a5:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.750591 containerd[1544]: 2025-10-13 00:06:41.742 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-bhnjj" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--bhnjj-eth0" Oct 13 00:06:41.770193 systemd[1]: Started 
cri-containerd-2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3.scope - libcontainer container 2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3. Oct 13 00:06:41.860651 systemd-networkd[1413]: cali6b05a2823ce: Link UP Oct 13 00:06:41.863419 systemd-networkd[1413]: cali6b05a2823ce: Gained carrier Oct 13 00:06:41.866517 containerd[1544]: time="2025-10-13T00:06:41.866462018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9mcnk,Uid:eb1f963b-8359-434b-98d9-a539c2e3c069,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3\"" Oct 13 00:06:41.889747 containerd[1544]: time="2025-10-13T00:06:41.889076340Z" level=info msg="CreateContainer within sandbox \"2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.475 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0 csi-node-driver- calico-system 01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1 749 0 2025-10-13 00:06:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 csi-node-driver-bc8bg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b05a2823ce [] [] }} ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.476 [INFO][4223] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.557 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" HandleID="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Workload="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.558 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" HandleID="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Workload="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"csi-node-driver-bc8bg", "timestamp":"2025-10-13 00:06:41.557776396 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.558 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.703 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.705 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.745 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.764 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.776 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.783 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.789 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.790 [INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.796 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93 Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.811 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.827 [INFO][4264] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.68/26] block=192.168.88.64/26 handle="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.827 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.68/26] handle="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.827 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:06:41.890999 containerd[1544]: 2025-10-13 00:06:41.827 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.68/26] IPv6=[] ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" HandleID="k8s-pod-network.03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Workload="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.891856 containerd[1544]: 2025-10-13 00:06:41.842 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"csi-node-driver-bc8bg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b05a2823ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.891856 containerd[1544]: 2025-10-13 00:06:41.843 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.68/32] ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.891856 containerd[1544]: 2025-10-13 00:06:41.844 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b05a2823ce ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.891856 containerd[1544]: 2025-10-13 00:06:41.865 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.891856 
containerd[1544]: 2025-10-13 00:06:41.866 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93", Pod:"csi-node-driver-bc8bg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b05a2823ce", MAC:"86:d5:5f:42:96:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:41.891856 containerd[1544]: 
2025-10-13 00:06:41.882 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" Namespace="calico-system" Pod="csi-node-driver-bc8bg" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-csi--node--driver--bc8bg-eth0" Oct 13 00:06:41.899537 containerd[1544]: time="2025-10-13T00:06:41.899495444Z" level=info msg="connecting to shim ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97" address="unix:///run/containerd/s/36aab2f3e8467dd4361c95552697fb90f3c21249708e311d547cc000feb1515e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:41.912615 containerd[1544]: time="2025-10-13T00:06:41.912529215Z" level=info msg="Container 0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:41.937838 containerd[1544]: time="2025-10-13T00:06:41.937779445Z" level=info msg="connecting to shim 03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93" address="unix:///run/containerd/s/1e9e83c4b70819edf23efda1b0d558e9127664abd4e80515827f9e9e44603dad" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:41.948802 containerd[1544]: time="2025-10-13T00:06:41.948664568Z" level=info msg="CreateContainer within sandbox \"2fada68763544b19c8b55e8b0aa9b57fe9305eb8c14e1ad935f9dc9e062492f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7\"" Oct 13 00:06:41.959244 containerd[1544]: time="2025-10-13T00:06:41.957223557Z" level=info msg="StartContainer for \"0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7\"" Oct 13 00:06:41.967350 systemd[1]: Started cri-containerd-ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97.scope - libcontainer container ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97. 
Oct 13 00:06:41.971202 containerd[1544]: time="2025-10-13T00:06:41.971158765Z" level=info msg="connecting to shim 0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7" address="unix:///run/containerd/s/ecbb65f534a0e128ac34d27df487d6121a57163a59957bc5f88d8ad697c34bf4" protocol=ttrpc version=3 Oct 13 00:06:41.983183 systemd[1]: Started cri-containerd-03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93.scope - libcontainer container 03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93. Oct 13 00:06:42.002967 systemd[1]: Started cri-containerd-0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7.scope - libcontainer container 0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7. Oct 13 00:06:42.068354 containerd[1544]: time="2025-10-13T00:06:42.067672927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc8bg,Uid:01aa0e53-b9f7-4e6c-b5ea-bde03ed0aec1,Namespace:calico-system,Attempt:0,} returns sandbox id \"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93\"" Oct 13 00:06:42.068354 containerd[1544]: time="2025-10-13T00:06:42.068144706Z" level=info msg="StartContainer for \"0194dbc773265ce035c753670fdc35cf16997df8f16f0c547213075a08fd1ec7\" returns successfully" Oct 13 00:06:42.074198 containerd[1544]: time="2025-10-13T00:06:42.074109504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:06:42.122375 containerd[1544]: time="2025-10-13T00:06:42.122304311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-bhnjj,Uid:53e4f244-7eb8-4d22-a08f-aeae3cade04b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97\"" Oct 13 00:06:42.645162 kubelet[2750]: I1013 00:06:42.644517 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9mcnk" podStartSLOduration=37.644498429 
podStartE2EDuration="37.644498429s" podCreationTimestamp="2025-10-13 00:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:06:42.643446587 +0000 UTC m=+44.395649503" watchObservedRunningTime="2025-10-13 00:06:42.644498429 +0000 UTC m=+44.396701345" Oct 13 00:06:42.788180 systemd-networkd[1413]: cali8ef0dcdd95a: Gained IPv6LL Oct 13 00:06:43.172534 systemd-networkd[1413]: calie2f53703a7c: Gained IPv6LL Oct 13 00:06:43.387558 containerd[1544]: time="2025-10-13T00:06:43.386951993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-lzkp5,Uid:e3c7842e-bcb8-45e2-b1cf-3717aeb735ad,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:43.573030 containerd[1544]: time="2025-10-13T00:06:43.572462235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:43.574231 containerd[1544]: time="2025-10-13T00:06:43.574197863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:06:43.575372 containerd[1544]: time="2025-10-13T00:06:43.575339988Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:43.580851 containerd[1544]: time="2025-10-13T00:06:43.580796482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:43.584200 containerd[1544]: time="2025-10-13T00:06:43.584153574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.509764539s" Oct 13 00:06:43.584200 containerd[1544]: time="2025-10-13T00:06:43.584195576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:06:43.587053 containerd[1544]: time="2025-10-13T00:06:43.586289658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:06:43.593184 containerd[1544]: time="2025-10-13T00:06:43.593130327Z" level=info msg="CreateContainer within sandbox \"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:06:43.599336 systemd-networkd[1413]: calicec266e587a: Link UP Oct 13 00:06:43.600965 systemd-networkd[1413]: calicec266e587a: Gained carrier Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.455 [INFO][4490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0 goldmane-854f97d977- calico-system e3c7842e-bcb8-45e2-b1cf-3717aeb735ad 843 0 2025-10-13 00:06:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 goldmane-854f97d977-lzkp5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicec266e587a [] [] }} ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.455 [INFO][4490] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.517 [INFO][4506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" HandleID="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Workload="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.517 [INFO][4506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" HandleID="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Workload="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"goldmane-854f97d977-lzkp5", "timestamp":"2025-10-13 00:06:43.517412234 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.517 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.518 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.518 [INFO][4506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.536 [INFO][4506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.546 [INFO][4506] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.554 [INFO][4506] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.559 [INFO][4506] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.563 [INFO][4506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.564 [INFO][4506] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.568 [INFO][4506] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843 Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.578 [INFO][4506] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.592 [INFO][4506] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.69/26] block=192.168.88.64/26 handle="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.592 [INFO][4506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.69/26] handle="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.593 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:06:43.629218 containerd[1544]: 2025-10-13 00:06:43.593 [INFO][4506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.69/26] IPv6=[] ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" HandleID="k8s-pod-network.bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Workload="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.595 [INFO][4490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"goldmane-854f97d977-lzkp5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicec266e587a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.596 [INFO][4490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.69/32] ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.596 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicec266e587a ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.598 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.599 [INFO][4490] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"e3c7842e-bcb8-45e2-b1cf-3717aeb735ad", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843", Pod:"goldmane-854f97d977-lzkp5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicec266e587a", MAC:"6a:4c:97:a2:ea:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:43.630541 containerd[1544]: 2025-10-13 00:06:43.620 [INFO][4490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" Namespace="calico-system" Pod="goldmane-854f97d977-lzkp5" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-goldmane--854f97d977--lzkp5-eth0" Oct 13 00:06:43.638783 containerd[1544]: time="2025-10-13T00:06:43.638555150Z" level=info msg="Container cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:43.651330 containerd[1544]: time="2025-10-13T00:06:43.651188766Z" level=info msg="CreateContainer within sandbox \"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85\"" Oct 13 00:06:43.653180 containerd[1544]: time="2025-10-13T00:06:43.653088240Z" level=info msg="StartContainer for \"cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85\"" Oct 13 00:06:43.665050 containerd[1544]: time="2025-10-13T00:06:43.663226238Z" level=info msg="connecting to shim cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85" address="unix:///run/containerd/s/1e9e83c4b70819edf23efda1b0d558e9127664abd4e80515827f9e9e44603dad" protocol=ttrpc version=3 Oct 13 00:06:43.679524 containerd[1544]: time="2025-10-13T00:06:43.678886853Z" level=info msg="connecting to shim bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843" address="unix:///run/containerd/s/e5d4176869a4b8f0ffdc26c6104328348b293a804f68b4565f777bdf469d51e7" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:43.685526 systemd-networkd[1413]: cali6b05a2823ce: Gained IPv6LL Oct 13 00:06:43.712558 systemd[1]: Started cri-containerd-cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85.scope - libcontainer container cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85. 
Oct 13 00:06:43.749241 systemd[1]: Started cri-containerd-bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843.scope - libcontainer container bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843. Oct 13 00:06:43.797928 containerd[1544]: time="2025-10-13T00:06:43.796978448Z" level=info msg="StartContainer for \"cf525846a9fcd081149b763fc997445f050d35d8c69a8a3a28307bdcf7bd0d85\" returns successfully" Oct 13 00:06:43.823161 containerd[1544]: time="2025-10-13T00:06:43.822998310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-lzkp5,Uid:e3c7842e-bcb8-45e2-b1cf-3717aeb735ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843\"" Oct 13 00:06:44.385876 containerd[1544]: time="2025-10-13T00:06:44.385451446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55db8,Uid:4a3d6e61-9a33-4840-8142-125a1b0a5221,Namespace:kube-system,Attempt:0,}" Oct 13 00:06:44.389951 containerd[1544]: time="2025-10-13T00:06:44.389440720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6546f656-gvhgn,Uid:70f10094-62b2-4cbd-b7f0-bdb77a19b9b7,Namespace:calico-system,Attempt:0,}" Oct 13 00:06:44.578004 systemd-networkd[1413]: cali69047420493: Link UP Oct 13 00:06:44.579569 systemd-networkd[1413]: cali69047420493: Gained carrier Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.453 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0 coredns-66bc5c9577- kube-system 4a3d6e61-9a33-4840-8142-125a1b0a5221 833 0 2025-10-13 00:06:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 coredns-66bc5c9577-55db8 eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69047420493 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.453 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.501 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" HandleID="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.503 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" HandleID="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"coredns-66bc5c9577-55db8", "timestamp":"2025-10-13 00:06:44.501598206 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.503 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.503 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.503 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.516 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.529 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.537 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.541 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.545 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.545 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.548 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1 Oct 13 00:06:44.601623 
containerd[1544]: 2025-10-13 00:06:44.556 [INFO][4622] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.566 [INFO][4622] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.70/26] block=192.168.88.64/26 handle="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.566 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.70/26] handle="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.566 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:06:44.601623 containerd[1544]: 2025-10-13 00:06:44.567 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.70/26] IPv6=[] ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" HandleID="k8s-pod-network.3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.603740 containerd[1544]: 2025-10-13 00:06:44.570 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", 
UID:"4a3d6e61-9a33-4840-8142-125a1b0a5221", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"coredns-66bc5c9577-55db8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69047420493", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:44.603740 containerd[1544]: 2025-10-13 00:06:44.570 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.70/32] 
ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.603740 containerd[1544]: 2025-10-13 00:06:44.570 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69047420493 ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.603740 containerd[1544]: 2025-10-13 00:06:44.579 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.603740 containerd[1544]: 2025-10-13 00:06:44.581 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4a3d6e61-9a33-4840-8142-125a1b0a5221", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1", Pod:"coredns-66bc5c9577-55db8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69047420493", MAC:"62:97:a2:ee:9c:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:44.604310 containerd[1544]: 2025-10-13 00:06:44.598 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" Namespace="kube-system" Pod="coredns-66bc5c9577-55db8" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-coredns--66bc5c9577--55db8-eth0" Oct 13 00:06:44.657543 containerd[1544]: time="2025-10-13T00:06:44.657310972Z" level=info 
msg="connecting to shim 3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1" address="unix:///run/containerd/s/82e6b60fa2fcc8192d928423b36a7aba50ab0026cac075e3d8fdd6872b023aef" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:44.705186 systemd[1]: Started cri-containerd-3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1.scope - libcontainer container 3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1. Oct 13 00:06:44.710199 systemd-networkd[1413]: caliefe80aa39f6: Link UP Oct 13 00:06:44.710829 systemd-networkd[1413]: caliefe80aa39f6: Gained carrier Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.458 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0 calico-kube-controllers-7b6546f656- calico-system 70f10094-62b2-4cbd-b7f0-bdb77a19b9b7 841 0 2025-10-13 00:06:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b6546f656 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 calico-kube-controllers-7b6546f656-gvhgn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefe80aa39f6 [] [] }} ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.458 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" 
WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.504 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" HandleID="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.505 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" HandleID="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"calico-kube-controllers-7b6546f656-gvhgn", "timestamp":"2025-10-13 00:06:44.504962296 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.505 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.566 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.567 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.620 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.630 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.643 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.650 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.660 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.660 [INFO][4624] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.663 [INFO][4624] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1 Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.678 [INFO][4624] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.695 [INFO][4624] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.71/26] block=192.168.88.64/26 handle="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.696 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.71/26] handle="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.696 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:06:44.741694 containerd[1544]: 2025-10-13 00:06:44.696 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.71/26] IPv6=[] ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" HandleID="k8s-pod-network.1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.700 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0", GenerateName:"calico-kube-controllers-7b6546f656-", Namespace:"calico-system", SelfLink:"", UID:"70f10094-62b2-4cbd-b7f0-bdb77a19b9b7", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6546f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"calico-kube-controllers-7b6546f656-gvhgn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefe80aa39f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.700 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.71/32] ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.700 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefe80aa39f6 ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.714 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.716 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0", GenerateName:"calico-kube-controllers-7b6546f656-", Namespace:"calico-system", SelfLink:"", UID:"70f10094-62b2-4cbd-b7f0-bdb77a19b9b7", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b6546f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1", Pod:"calico-kube-controllers-7b6546f656-gvhgn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefe80aa39f6", MAC:"ce:b2:20:df:66:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:44.744660 containerd[1544]: 2025-10-13 00:06:44.737 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" Namespace="calico-system" Pod="calico-kube-controllers-7b6546f656-gvhgn" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--kube--controllers--7b6546f656--gvhgn-eth0" Oct 13 00:06:44.800407 containerd[1544]: time="2025-10-13T00:06:44.800363130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-55db8,Uid:4a3d6e61-9a33-4840-8142-125a1b0a5221,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1\"" Oct 13 00:06:44.807240 containerd[1544]: time="2025-10-13T00:06:44.806955704Z" level=info msg="connecting to shim 1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1" address="unix:///run/containerd/s/d1f220b8dd62dc9a93fa789429cde879e84005c9ad51304584f9245d5ea798a0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:44.816211 containerd[1544]: time="2025-10-13T00:06:44.815970332Z" level=info msg="CreateContainer within sandbox \"3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:06:44.846181 systemd[1]: Started cri-containerd-1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1.scope - libcontainer container 1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1. 
Oct 13 00:06:44.856203 containerd[1544]: time="2025-10-13T00:06:44.856121921Z" level=info msg="Container 5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:44.865931 containerd[1544]: time="2025-10-13T00:06:44.865750052Z" level=info msg="CreateContainer within sandbox \"3fbffad4ebcde55c04340de7161138213cf4c40415f3bde2d1512bce97d586c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527\"" Oct 13 00:06:44.868539 containerd[1544]: time="2025-10-13T00:06:44.868490758Z" level=info msg="StartContainer for \"5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527\"" Oct 13 00:06:44.870037 containerd[1544]: time="2025-10-13T00:06:44.869994016Z" level=info msg="connecting to shim 5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527" address="unix:///run/containerd/s/82e6b60fa2fcc8192d928423b36a7aba50ab0026cac075e3d8fdd6872b023aef" protocol=ttrpc version=3 Oct 13 00:06:44.901118 systemd[1]: Started cri-containerd-5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527.scope - libcontainer container 5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527. 
Oct 13 00:06:44.917295 containerd[1544]: time="2025-10-13T00:06:44.916868224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b6546f656-gvhgn,Uid:70f10094-62b2-4cbd-b7f0-bdb77a19b9b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1\"" Oct 13 00:06:44.958877 containerd[1544]: time="2025-10-13T00:06:44.958835202Z" level=info msg="StartContainer for \"5c67bc44f38fb7c3d5dbbf80dc3969f7a448977fe5edf5439fc8eea602ce0527\" returns successfully" Oct 13 00:06:45.386042 containerd[1544]: time="2025-10-13T00:06:45.385892628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-lpcf6,Uid:c183318d-6e0e-4dec-a61a-7d306d33a93f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:06:45.524177 systemd-networkd[1413]: cali5a0d1d6d911: Link UP Oct 13 00:06:45.529498 systemd-networkd[1413]: cali5a0d1d6d911: Gained carrier Oct 13 00:06:45.540487 systemd-networkd[1413]: calicec266e587a: Gained IPv6LL Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.435 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0 calico-apiserver-5789d66748- calico-apiserver c183318d-6e0e-4dec-a61a-7d306d33a93f 845 0 2025-10-13 00:06:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5789d66748 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-c-ccbbacf556 calico-apiserver-5789d66748-lpcf6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a0d1d6d911 [] [] }} ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" 
WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.435 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.462 [INFO][4792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" HandleID="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.462 [INFO][4792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" HandleID="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-c-ccbbacf556", "pod":"calico-apiserver-5789d66748-lpcf6", "timestamp":"2025-10-13 00:06:45.462465213 +0000 UTC"}, Hostname:"ci-4459-1-0-c-ccbbacf556", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.462 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.462 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.462 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-ccbbacf556' Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.474 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.480 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.486 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.488 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.491 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.492 [INFO][4792] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.494 [INFO][4792] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9 Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.499 [INFO][4792] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" 
host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.516 [INFO][4792] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.72/26] block=192.168.88.64/26 handle="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.516 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.72/26] handle="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" host="ci-4459-1-0-c-ccbbacf556" Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.516 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:06:45.554927 containerd[1544]: 2025-10-13 00:06:45.516 [INFO][4792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.72/26] IPv6=[] ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" HandleID="k8s-pod-network.019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Workload="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.518 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0", GenerateName:"calico-apiserver-5789d66748-", Namespace:"calico-apiserver", SelfLink:"", UID:"c183318d-6e0e-4dec-a61a-7d306d33a93f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789d66748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"", Pod:"calico-apiserver-5789d66748-lpcf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a0d1d6d911", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.518 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.72/32] ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.518 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a0d1d6d911 ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.533 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.535 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0", GenerateName:"calico-apiserver-5789d66748-", Namespace:"calico-apiserver", SelfLink:"", UID:"c183318d-6e0e-4dec-a61a-7d306d33a93f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789d66748", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-ccbbacf556", ContainerID:"019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9", Pod:"calico-apiserver-5789d66748-lpcf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a0d1d6d911", MAC:"0e:95:af:b3:51:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:06:45.555837 containerd[1544]: 2025-10-13 00:06:45.549 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" Namespace="calico-apiserver" Pod="calico-apiserver-5789d66748-lpcf6" WorkloadEndpoint="ci--4459--1--0--c--ccbbacf556-k8s-calico--apiserver--5789d66748--lpcf6-eth0" Oct 13 00:06:45.586520 containerd[1544]: time="2025-10-13T00:06:45.586208827Z" level=info msg="connecting to shim 019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9" address="unix:///run/containerd/s/0269d443025f09812d77ef24db3c5d039799d37f9528a1c07ec7d25a414eeaf2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:06:45.613293 systemd[1]: Started cri-containerd-019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9.scope - libcontainer container 019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9. 
Oct 13 00:06:45.656449 containerd[1544]: time="2025-10-13T00:06:45.656247724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789d66748-lpcf6,Uid:c183318d-6e0e-4dec-a61a-7d306d33a93f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9\"" Oct 13 00:06:45.678266 kubelet[2750]: I1013 00:06:45.678149 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-55db8" podStartSLOduration=40.678130034 podStartE2EDuration="40.678130034s" podCreationTimestamp="2025-10-13 00:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:06:45.674425213 +0000 UTC m=+47.426628169" watchObservedRunningTime="2025-10-13 00:06:45.678130034 +0000 UTC m=+47.430332910" Oct 13 00:06:46.500467 systemd-networkd[1413]: cali69047420493: Gained IPv6LL Oct 13 00:06:46.501791 systemd-networkd[1413]: caliefe80aa39f6: Gained IPv6LL Oct 13 00:06:47.268297 systemd-networkd[1413]: cali5a0d1d6d911: Gained IPv6LL Oct 13 00:06:47.499392 containerd[1544]: time="2025-10-13T00:06:47.499308776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:47.501198 containerd[1544]: time="2025-10-13T00:06:47.501146284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:06:47.502195 containerd[1544]: time="2025-10-13T00:06:47.502127440Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:47.505841 containerd[1544]: time="2025-10-13T00:06:47.505447122Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:47.506866 containerd[1544]: time="2025-10-13T00:06:47.506816492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.920489353s" Oct 13 00:06:47.506866 containerd[1544]: time="2025-10-13T00:06:47.506864774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:06:47.508424 containerd[1544]: time="2025-10-13T00:06:47.508386110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:06:47.514507 containerd[1544]: time="2025-10-13T00:06:47.514457893Z" level=info msg="CreateContainer within sandbox \"ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:06:47.527289 containerd[1544]: time="2025-10-13T00:06:47.527144480Z" level=info msg="Container 1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:47.545066 containerd[1544]: time="2025-10-13T00:06:47.544996496Z" level=info msg="CreateContainer within sandbox \"ff21a457f22f55c6a49fb502d93bb257aff769d633c04a90629e2ddbe9666a97\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90\"" Oct 13 00:06:47.547892 containerd[1544]: time="2025-10-13T00:06:47.547584191Z" level=info msg="StartContainer for 
\"1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90\"" Oct 13 00:06:47.551915 containerd[1544]: time="2025-10-13T00:06:47.551762905Z" level=info msg="connecting to shim 1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90" address="unix:///run/containerd/s/36aab2f3e8467dd4361c95552697fb90f3c21249708e311d547cc000feb1515e" protocol=ttrpc version=3 Oct 13 00:06:47.580120 systemd[1]: Started cri-containerd-1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90.scope - libcontainer container 1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90. Oct 13 00:06:47.630770 containerd[1544]: time="2025-10-13T00:06:47.630675366Z" level=info msg="StartContainer for \"1f088c7f00102d9439e005e962c7e961123ac8a5285cc7431b3f491b30a9ca90\" returns successfully" Oct 13 00:06:48.674973 kubelet[2750]: I1013 00:06:48.674924 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:06:49.233815 containerd[1544]: time="2025-10-13T00:06:49.233754107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:49.236181 containerd[1544]: time="2025-10-13T00:06:49.236137552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:06:49.237387 containerd[1544]: time="2025-10-13T00:06:49.237337755Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:49.243199 containerd[1544]: time="2025-10-13T00:06:49.243108122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:49.246052 containerd[1544]: 
time="2025-10-13T00:06:49.245975144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.737546673s" Oct 13 00:06:49.246161 containerd[1544]: time="2025-10-13T00:06:49.246074988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:06:49.249905 containerd[1544]: time="2025-10-13T00:06:49.249834882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:06:49.252969 containerd[1544]: time="2025-10-13T00:06:49.252922152Z" level=info msg="CreateContainer within sandbox \"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 00:06:49.277951 containerd[1544]: time="2025-10-13T00:06:49.277873924Z" level=info msg="Container 3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:49.282828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967626168.mount: Deactivated successfully. 
Oct 13 00:06:49.298520 containerd[1544]: time="2025-10-13T00:06:49.298470101Z" level=info msg="CreateContainer within sandbox \"03ef8a60fde6280f35f66fa096bac0cdab9d4e278472d489157068e1b5e47f93\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240\"" Oct 13 00:06:49.300170 containerd[1544]: time="2025-10-13T00:06:49.300119080Z" level=info msg="StartContainer for \"3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240\"" Oct 13 00:06:49.304062 containerd[1544]: time="2025-10-13T00:06:49.304012979Z" level=info msg="connecting to shim 3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240" address="unix:///run/containerd/s/1e9e83c4b70819edf23efda1b0d558e9127664abd4e80515827f9e9e44603dad" protocol=ttrpc version=3 Oct 13 00:06:49.337564 systemd[1]: Started cri-containerd-3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240.scope - libcontainer container 3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240. 
Oct 13 00:06:49.445667 containerd[1544]: time="2025-10-13T00:06:49.445619481Z" level=info msg="StartContainer for \"3ef7fdc7ed34df4951323d1522a78d674cf42db743c195996a65408d1755b240\" returns successfully" Oct 13 00:06:49.717649 kubelet[2750]: I1013 00:06:49.717198 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5789d66748-bhnjj" podStartSLOduration=29.334216633 podStartE2EDuration="34.717175509s" podCreationTimestamp="2025-10-13 00:06:15 +0000 UTC" firstStartedPulling="2025-10-13 00:06:42.124719408 +0000 UTC m=+43.876922324" lastFinishedPulling="2025-10-13 00:06:47.507678284 +0000 UTC m=+49.259881200" observedRunningTime="2025-10-13 00:06:47.689149236 +0000 UTC m=+49.441352152" watchObservedRunningTime="2025-10-13 00:06:49.717175509 +0000 UTC m=+51.469378425" Oct 13 00:06:49.718620 kubelet[2750]: I1013 00:06:49.718209 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bc8bg" podStartSLOduration=21.54270053 podStartE2EDuration="28.718192305s" podCreationTimestamp="2025-10-13 00:06:21 +0000 UTC" firstStartedPulling="2025-10-13 00:06:42.072638766 +0000 UTC m=+43.824841682" lastFinishedPulling="2025-10-13 00:06:49.248130541 +0000 UTC m=+51.000333457" observedRunningTime="2025-10-13 00:06:49.716338839 +0000 UTC m=+51.468541755" watchObservedRunningTime="2025-10-13 00:06:49.718192305 +0000 UTC m=+51.470395221" Oct 13 00:06:49.779099 kubelet[2750]: I1013 00:06:49.779001 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:06:50.472203 kubelet[2750]: I1013 00:06:50.472061 2750 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:06:50.472203 kubelet[2750]: I1013 00:06:50.472117 2750 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:06:51.687513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585649694.mount: Deactivated successfully. Oct 13 00:06:52.229961 containerd[1544]: time="2025-10-13T00:06:52.229396660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:52.230674 containerd[1544]: time="2025-10-13T00:06:52.230572460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:06:52.231716 containerd[1544]: time="2025-10-13T00:06:52.231396569Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:52.235090 containerd[1544]: time="2025-10-13T00:06:52.235046334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:52.235857 containerd[1544]: time="2025-10-13T00:06:52.235816801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.984999124s" Oct 13 00:06:52.241457 containerd[1544]: time="2025-10-13T00:06:52.235856762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:06:52.243763 containerd[1544]: time="2025-10-13T00:06:52.243625430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 
00:06:52.249474 containerd[1544]: time="2025-10-13T00:06:52.249400589Z" level=info msg="CreateContainer within sandbox \"bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:06:52.268711 containerd[1544]: time="2025-10-13T00:06:52.267114079Z" level=info msg="Container b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:52.273690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362691268.mount: Deactivated successfully. Oct 13 00:06:52.282808 containerd[1544]: time="2025-10-13T00:06:52.282748738Z" level=info msg="CreateContainer within sandbox \"bb28a19781da2f7986e3439cdb58c35362303cba0065e5331434863e8270e843\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\"" Oct 13 00:06:52.284955 containerd[1544]: time="2025-10-13T00:06:52.284919812Z" level=info msg="StartContainer for \"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\"" Oct 13 00:06:52.287340 containerd[1544]: time="2025-10-13T00:06:52.287282054Z" level=info msg="connecting to shim b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891" address="unix:///run/containerd/s/e5d4176869a4b8f0ffdc26c6104328348b293a804f68b4565f777bdf469d51e7" protocol=ttrpc version=3 Oct 13 00:06:52.333144 systemd[1]: Started cri-containerd-b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891.scope - libcontainer container b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891. 
Oct 13 00:06:52.399537 containerd[1544]: time="2025-10-13T00:06:52.399417316Z" level=info msg="StartContainer for \"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" returns successfully" Oct 13 00:06:52.728305 kubelet[2750]: I1013 00:06:52.728066 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-lzkp5" podStartSLOduration=23.311922609 podStartE2EDuration="31.728048516s" podCreationTimestamp="2025-10-13 00:06:21 +0000 UTC" firstStartedPulling="2025-10-13 00:06:43.826747057 +0000 UTC m=+45.578949973" lastFinishedPulling="2025-10-13 00:06:52.242872964 +0000 UTC m=+53.995075880" observedRunningTime="2025-10-13 00:06:52.72265169 +0000 UTC m=+54.474854606" watchObservedRunningTime="2025-10-13 00:06:52.728048516 +0000 UTC m=+54.480251432" Oct 13 00:06:52.861503 containerd[1544]: time="2025-10-13T00:06:52.861421910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"b5183c9acc6de8a10ded2b88057f691cb039a098a61d9398c2861fd2c73c2863\" pid:5013 exit_status:1 exited_at:{seconds:1760314012 nanos:848716512}" Oct 13 00:06:53.790363 containerd[1544]: time="2025-10-13T00:06:53.789873549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"04242c4dbb1f60a22307570a84d917ab47fb31e8c2ced36d009c782de65508fa\" pid:5039 exit_status:1 exited_at:{seconds:1760314013 nanos:789466535}" Oct 13 00:06:54.873940 containerd[1544]: time="2025-10-13T00:06:54.873521349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"e0844e63088cf1116eda0333ab7c85e2c71a17bf87e3fee58281c062125f4107\" pid:5065 exit_status:1 exited_at:{seconds:1760314014 nanos:872746403}" Oct 13 00:06:55.043812 containerd[1544]: time="2025-10-13T00:06:55.043755472Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:55.044537 containerd[1544]: time="2025-10-13T00:06:55.044456536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:06:55.045787 containerd[1544]: time="2025-10-13T00:06:55.045735139Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:55.048351 containerd[1544]: time="2025-10-13T00:06:55.048283584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:55.049295 containerd[1544]: time="2025-10-13T00:06:55.048997287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.805332416s" Oct 13 00:06:55.049295 containerd[1544]: time="2025-10-13T00:06:55.049039489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:06:55.050808 containerd[1544]: time="2025-10-13T00:06:55.050751546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:06:55.071159 containerd[1544]: time="2025-10-13T00:06:55.071121546Z" level=info msg="CreateContainer within sandbox \"1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:06:55.083572 containerd[1544]: time="2025-10-13T00:06:55.081123519Z" level=info msg="Container df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:55.089031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413211151.mount: Deactivated successfully. Oct 13 00:06:55.098228 containerd[1544]: time="2025-10-13T00:06:55.098062885Z" level=info msg="CreateContainer within sandbox \"1ce2e25a84b190a8175713e289d6e56d25a4deecda217ccac28ab18c46e377d1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\"" Oct 13 00:06:55.101589 containerd[1544]: time="2025-10-13T00:06:55.100052751Z" level=info msg="StartContainer for \"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\"" Oct 13 00:06:55.102557 containerd[1544]: time="2025-10-13T00:06:55.102518793Z" level=info msg="connecting to shim df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d" address="unix:///run/containerd/s/d1f220b8dd62dc9a93fa789429cde879e84005c9ad51304584f9245d5ea798a0" protocol=ttrpc version=3 Oct 13 00:06:55.126127 systemd[1]: Started cri-containerd-df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d.scope - libcontainer container df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d. 
Oct 13 00:06:55.175119 containerd[1544]: time="2025-10-13T00:06:55.175076655Z" level=info msg="StartContainer for \"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" returns successfully" Oct 13 00:06:55.469799 containerd[1544]: time="2025-10-13T00:06:55.469670325Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:06:55.472926 containerd[1544]: time="2025-10-13T00:06:55.470651638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:06:55.475617 containerd[1544]: time="2025-10-13T00:06:55.475568562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 424.778774ms" Oct 13 00:06:55.475880 containerd[1544]: time="2025-10-13T00:06:55.475773609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:06:55.483246 containerd[1544]: time="2025-10-13T00:06:55.483198416Z" level=info msg="CreateContainer within sandbox \"019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:06:55.492606 containerd[1544]: time="2025-10-13T00:06:55.492271559Z" level=info msg="Container 35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:06:55.502620 containerd[1544]: time="2025-10-13T00:06:55.502547022Z" level=info msg="CreateContainer within sandbox \"019c5722e0102a83b346b90aa97decfd17f42bc77ae158d108073b45b605a0a9\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf\"" Oct 13 00:06:55.503627 containerd[1544]: time="2025-10-13T00:06:55.503588337Z" level=info msg="StartContainer for \"35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf\"" Oct 13 00:06:55.507404 containerd[1544]: time="2025-10-13T00:06:55.507252339Z" level=info msg="connecting to shim 35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf" address="unix:///run/containerd/s/0269d443025f09812d77ef24db3c5d039799d37f9528a1c07ec7d25a414eeaf2" protocol=ttrpc version=3 Oct 13 00:06:55.536256 systemd[1]: Started cri-containerd-35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf.scope - libcontainer container 35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf. Oct 13 00:06:55.597479 containerd[1544]: time="2025-10-13T00:06:55.597399587Z" level=info msg="StartContainer for \"35c4a243e92ba6e9894a2183d731ad884ec4000eca14d2c7a67925b8551a6fdf\" returns successfully" Oct 13 00:06:55.737041 kubelet[2750]: I1013 00:06:55.735606 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5789d66748-lpcf6" podStartSLOduration=30.918124116 podStartE2EDuration="40.735588039s" podCreationTimestamp="2025-10-13 00:06:15 +0000 UTC" firstStartedPulling="2025-10-13 00:06:45.659155434 +0000 UTC m=+47.411358350" lastFinishedPulling="2025-10-13 00:06:55.476619357 +0000 UTC m=+57.228822273" observedRunningTime="2025-10-13 00:06:55.735255747 +0000 UTC m=+57.487458663" watchObservedRunningTime="2025-10-13 00:06:55.735588039 +0000 UTC m=+57.487790955" Oct 13 00:06:55.819189 containerd[1544]: time="2025-10-13T00:06:55.819147187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"969540d96b42ee7662cfcc6608d1b24158ae7cc45d2e368128776ca5add74c01\" pid:5167 
exited_at:{seconds:1760314015 nanos:818079831}" Oct 13 00:06:55.841996 kubelet[2750]: I1013 00:06:55.841922 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b6546f656-gvhgn" podStartSLOduration=24.712939372 podStartE2EDuration="34.841887466s" podCreationTimestamp="2025-10-13 00:06:21 +0000 UTC" firstStartedPulling="2025-10-13 00:06:44.921418399 +0000 UTC m=+46.673621315" lastFinishedPulling="2025-10-13 00:06:55.050366453 +0000 UTC m=+56.802569409" observedRunningTime="2025-10-13 00:06:55.762214367 +0000 UTC m=+57.514417323" watchObservedRunningTime="2025-10-13 00:06:55.841887466 +0000 UTC m=+57.594090382" Oct 13 00:06:56.726644 kubelet[2750]: I1013 00:06:56.726027 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:07:06.693634 containerd[1544]: time="2025-10-13T00:07:06.693584701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"5322219b1860ce8a739d3da61046a5ea5db890fe156e24fd3ca3b0fcff3a2418\" pid:5205 exited_at:{seconds:1760314026 nanos:693265837}" Oct 13 00:07:17.712512 kubelet[2750]: I1013 00:07:17.712184 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:07:24.852606 containerd[1544]: time="2025-10-13T00:07:24.852523700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"f1b4b124b215ca58449ac956d8c5bdc2ce83b111757241f60a26ae9d4d541544\" pid:5242 exited_at:{seconds:1760314044 nanos:852093268}" Oct 13 00:07:25.770390 containerd[1544]: time="2025-10-13T00:07:25.770316090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"3736baeaa89c71b3f6051bb6e89232bcacaa0c9a33f469aba4d200df0f95390c\" pid:5271 exited_at:{seconds:1760314045 nanos:769675182}" Oct 13 
00:07:36.734742 containerd[1544]: time="2025-10-13T00:07:36.734678832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"75ec96c2a987cf159d5bce5a6e34b09f88b4c6d71973e893e57be952d23ce9b1\" pid:5294 exited_at:{seconds:1760314056 nanos:734261795}" Oct 13 00:07:51.277598 containerd[1544]: time="2025-10-13T00:07:51.277191040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"f6f83ceaa75157df1e26e46d56fdd76fcaf650d8fc718f6c22d160cbfddc484f\" pid:5320 exited_at:{seconds:1760314071 nanos:276123239}" Oct 13 00:07:54.789807 containerd[1544]: time="2025-10-13T00:07:54.789551789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"0ba280896908e02252e9fd321190b088148e3f0e2b8cb5bb9cd553b7baecb0af\" pid:5343 exited_at:{seconds:1760314074 nanos:789233908}" Oct 13 00:07:55.761130 containerd[1544]: time="2025-10-13T00:07:55.761082437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"fb8e638d1eae3ecd4047062d96ab8c88b953f0cb62c6850dfbbec2dcb5f8e66e\" pid:5365 exited_at:{seconds:1760314075 nanos:760310594}" Oct 13 00:07:58.760983 containerd[1544]: time="2025-10-13T00:07:58.760864157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"365f057439fcaef177edb1a95e3e9f44a2ffca03edb9d68231586411514a698b\" pid:5394 exited_at:{seconds:1760314078 nanos:760521515}" Oct 13 00:08:06.669176 containerd[1544]: time="2025-10-13T00:08:06.669110315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"af5862fff6af5dace48517c63ecefadc6e0ad432e24016e533cf4ccf60324b14\" 
pid:5420 exited_at:{seconds:1760314086 nanos:668668912}" Oct 13 00:08:24.795203 containerd[1544]: time="2025-10-13T00:08:24.795152076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"6d1d4945f57038c787ada539c942475f5a770745ed9a0d4203afedce564e6870\" pid:5466 exited_at:{seconds:1760314104 nanos:794560309}" Oct 13 00:08:25.769730 containerd[1544]: time="2025-10-13T00:08:25.769664796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"336e92568ca54923937dd5e71b25c590de4fbe3ba2f994e80c3ac924678ddefc\" pid:5488 exited_at:{seconds:1760314105 nanos:766689400}" Oct 13 00:08:35.306224 systemd[1]: Started sshd@7-5.75.247.119:22-139.178.89.65:60086.service - OpenSSH per-connection server daemon (139.178.89.65:60086). Oct 13 00:08:36.284461 sshd[5501]: Accepted publickey for core from 139.178.89.65 port 60086 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:08:36.287177 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:08:36.296985 systemd-logind[1519]: New session 8 of user core. Oct 13 00:08:36.303161 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 00:08:36.763718 containerd[1544]: time="2025-10-13T00:08:36.763529856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"7cabc53bffe39f34c766fd5c26c32c6b8d720601893d211b5d7ffbfa04de04a8\" pid:5519 exited_at:{seconds:1760314116 nanos:763153771}" Oct 13 00:08:37.054150 sshd[5506]: Connection closed by 139.178.89.65 port 60086 Oct 13 00:08:37.055240 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Oct 13 00:08:37.062779 systemd[1]: sshd@7-5.75.247.119:22-139.178.89.65:60086.service: Deactivated successfully. 
Oct 13 00:08:37.067596 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 00:08:37.069375 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Oct 13 00:08:37.071062 systemd-logind[1519]: Removed session 8. Oct 13 00:08:42.227267 systemd[1]: Started sshd@8-5.75.247.119:22-139.178.89.65:49800.service - OpenSSH per-connection server daemon (139.178.89.65:49800). Oct 13 00:08:43.216079 sshd[5543]: Accepted publickey for core from 139.178.89.65 port 49800 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:08:43.219007 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:08:43.226793 systemd-logind[1519]: New session 9 of user core. Oct 13 00:08:43.231196 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 00:08:43.971957 sshd[5546]: Connection closed by 139.178.89.65 port 49800 Oct 13 00:08:43.971779 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Oct 13 00:08:43.979084 systemd[1]: sshd@8-5.75.247.119:22-139.178.89.65:49800.service: Deactivated successfully. Oct 13 00:08:43.979766 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Oct 13 00:08:43.983284 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 00:08:43.987124 systemd-logind[1519]: Removed session 9. Oct 13 00:08:44.137787 systemd[1]: Started sshd@9-5.75.247.119:22-139.178.89.65:49804.service - OpenSSH per-connection server daemon (139.178.89.65:49804). Oct 13 00:08:45.118452 sshd[5559]: Accepted publickey for core from 139.178.89.65 port 49804 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:08:45.120681 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:08:45.128184 systemd-logind[1519]: New session 10 of user core. Oct 13 00:08:45.135177 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 13 00:08:45.908877 sshd[5562]: Connection closed by 139.178.89.65 port 49804 Oct 13 00:08:45.909470 sshd-session[5559]: pam_unix(sshd:session): session closed for user core Oct 13 00:08:45.917535 systemd[1]: sshd@9-5.75.247.119:22-139.178.89.65:49804.service: Deactivated successfully. Oct 13 00:08:45.922488 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 00:08:45.923546 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Oct 13 00:08:45.925156 systemd-logind[1519]: Removed session 10. Oct 13 00:08:46.081374 systemd[1]: Started sshd@10-5.75.247.119:22-139.178.89.65:49818.service - OpenSSH per-connection server daemon (139.178.89.65:49818). Oct 13 00:08:47.063962 sshd[5572]: Accepted publickey for core from 139.178.89.65 port 49818 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:08:47.067062 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:08:47.074414 systemd-logind[1519]: New session 11 of user core. Oct 13 00:08:47.079229 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 00:08:47.807996 sshd[5575]: Connection closed by 139.178.89.65 port 49818 Oct 13 00:08:47.808510 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Oct 13 00:08:47.814691 systemd[1]: sshd@10-5.75.247.119:22-139.178.89.65:49818.service: Deactivated successfully. Oct 13 00:08:47.819386 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 00:08:47.822229 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Oct 13 00:08:47.824391 systemd-logind[1519]: Removed session 11. 
Oct 13 00:08:51.249336 containerd[1544]: time="2025-10-13T00:08:51.249255467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"6460b1e46de0f1ba4909d3108b8bad2897652796fbfc696016257b554def5721\" pid:5603 exited_at:{seconds:1760314131 nanos:248792100}" Oct 13 00:08:52.981117 systemd[1]: Started sshd@11-5.75.247.119:22-139.178.89.65:35948.service - OpenSSH per-connection server daemon (139.178.89.65:35948). Oct 13 00:08:53.960799 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 35948 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:08:53.963436 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:08:53.970176 systemd-logind[1519]: New session 12 of user core. Oct 13 00:08:53.979217 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 00:08:54.702987 sshd[5617]: Connection closed by 139.178.89.65 port 35948 Oct 13 00:08:54.703719 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Oct 13 00:08:54.710857 systemd[1]: sshd@11-5.75.247.119:22-139.178.89.65:35948.service: Deactivated successfully. Oct 13 00:08:54.714652 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 00:08:54.719003 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Oct 13 00:08:54.722201 systemd-logind[1519]: Removed session 12. 
Oct 13 00:08:54.791143 containerd[1544]: time="2025-10-13T00:08:54.791030833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"e06763ff1f61d5c1abacbc8244f1551ec90c14203197342e0676d39982333062\" pid:5641 exited_at:{seconds:1760314134 nanos:790422463}" Oct 13 00:08:55.758800 containerd[1544]: time="2025-10-13T00:08:55.758757657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"57815cdf28529ec8123eb00eadcffe7c70860ff027857a60494f21d16a740ea2\" pid:5662 exited_at:{seconds:1760314135 nanos:758384131}" Oct 13 00:08:58.778299 containerd[1544]: time="2025-10-13T00:08:58.778234961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"59600b171ca9500821bcd507ddfd5f24e6b7701b3bafd857a21210ed8f04e75f\" pid:5687 exited_at:{seconds:1760314138 nanos:777509149}" Oct 13 00:08:59.876129 systemd[1]: Started sshd@12-5.75.247.119:22-139.178.89.65:35950.service - OpenSSH per-connection server daemon (139.178.89.65:35950). Oct 13 00:09:00.852187 sshd[5696]: Accepted publickey for core from 139.178.89.65 port 35950 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs Oct 13 00:09:00.854356 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:00.870169 systemd-logind[1519]: New session 13 of user core. Oct 13 00:09:00.877288 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 00:09:01.596037 sshd[5699]: Connection closed by 139.178.89.65 port 35950 Oct 13 00:09:01.597518 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:01.603248 systemd[1]: sshd@12-5.75.247.119:22-139.178.89.65:35950.service: Deactivated successfully. Oct 13 00:09:01.608013 systemd[1]: session-13.scope: Deactivated successfully. 
Oct 13 00:09:01.610803 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit.
Oct 13 00:09:01.614868 systemd-logind[1519]: Removed session 13.
Oct 13 00:09:01.761980 systemd[1]: Started sshd@13-5.75.247.119:22-139.178.89.65:35966.service - OpenSSH per-connection server daemon (139.178.89.65:35966).
Oct 13 00:09:02.741282 sshd[5711]: Accepted publickey for core from 139.178.89.65 port 35966 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:02.743364 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:02.749549 systemd-logind[1519]: New session 14 of user core.
Oct 13 00:09:02.755313 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 00:09:03.651009 sshd[5714]: Connection closed by 139.178.89.65 port 35966
Oct 13 00:09:03.651969 sshd-session[5711]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:03.660327 systemd[1]: sshd@13-5.75.247.119:22-139.178.89.65:35966.service: Deactivated successfully.
Oct 13 00:09:03.667370 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 00:09:03.670336 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit.
Oct 13 00:09:03.673676 systemd-logind[1519]: Removed session 14.
Oct 13 00:09:03.828188 systemd[1]: Started sshd@14-5.75.247.119:22-139.178.89.65:33558.service - OpenSSH per-connection server daemon (139.178.89.65:33558).
Oct 13 00:09:04.831935 sshd[5724]: Accepted publickey for core from 139.178.89.65 port 33558 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:04.833176 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:04.838463 systemd-logind[1519]: New session 15 of user core.
Oct 13 00:09:04.847217 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 00:09:06.279462 sshd[5727]: Connection closed by 139.178.89.65 port 33558
Oct 13 00:09:06.280383 sshd-session[5724]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:06.286120 systemd[1]: sshd@14-5.75.247.119:22-139.178.89.65:33558.service: Deactivated successfully.
Oct 13 00:09:06.289424 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 00:09:06.291045 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit.
Oct 13 00:09:06.292642 systemd-logind[1519]: Removed session 15.
Oct 13 00:09:06.450464 systemd[1]: Started sshd@15-5.75.247.119:22-139.178.89.65:33564.service - OpenSSH per-connection server daemon (139.178.89.65:33564).
Oct 13 00:09:06.683626 containerd[1544]: time="2025-10-13T00:09:06.683485833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"fc3a206986f3a6f9ae16fdcb3bc98f63ec8f5de8161d9ed699db666a13e70528\" pid:5763 exited_at:{seconds:1760314146 nanos:683030825}"
Oct 13 00:09:07.444697 sshd[5746]: Accepted publickey for core from 139.178.89.65 port 33564 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:07.447327 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:07.455667 systemd-logind[1519]: New session 16 of user core.
Oct 13 00:09:07.461139 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 00:09:08.401999 sshd[5776]: Connection closed by 139.178.89.65 port 33564
Oct 13 00:09:08.403045 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:08.408650 systemd[1]: sshd@15-5.75.247.119:22-139.178.89.65:33564.service: Deactivated successfully.
Oct 13 00:09:08.412258 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 00:09:08.417391 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit.
Oct 13 00:09:08.419769 systemd-logind[1519]: Removed session 16.
Oct 13 00:09:08.573243 systemd[1]: Started sshd@16-5.75.247.119:22-139.178.89.65:33574.service - OpenSSH per-connection server daemon (139.178.89.65:33574).
Oct 13 00:09:09.578629 sshd[5786]: Accepted publickey for core from 139.178.89.65 port 33574 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:09.580718 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:09.591025 systemd-logind[1519]: New session 17 of user core.
Oct 13 00:09:09.598117 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 00:09:10.383023 sshd[5789]: Connection closed by 139.178.89.65 port 33574
Oct 13 00:09:10.383643 sshd-session[5786]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:10.396371 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit.
Oct 13 00:09:10.396763 systemd[1]: sshd@16-5.75.247.119:22-139.178.89.65:33574.service: Deactivated successfully.
Oct 13 00:09:10.400584 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 00:09:10.408203 systemd-logind[1519]: Removed session 17.
Oct 13 00:09:15.548251 systemd[1]: Started sshd@17-5.75.247.119:22-139.178.89.65:36054.service - OpenSSH per-connection server daemon (139.178.89.65:36054).
Oct 13 00:09:16.536566 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 36054 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:16.538974 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:16.546976 systemd-logind[1519]: New session 18 of user core.
Oct 13 00:09:16.553219 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 00:09:17.287130 sshd[5806]: Connection closed by 139.178.89.65 port 36054
Oct 13 00:09:17.289159 sshd-session[5803]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:17.296309 systemd[1]: sshd@17-5.75.247.119:22-139.178.89.65:36054.service: Deactivated successfully.
Oct 13 00:09:17.299057 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 00:09:17.302643 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit.
Oct 13 00:09:17.304473 systemd-logind[1519]: Removed session 18.
Oct 13 00:09:22.465243 systemd[1]: Started sshd@18-5.75.247.119:22-139.178.89.65:60568.service - OpenSSH per-connection server daemon (139.178.89.65:60568).
Oct 13 00:09:23.455711 sshd[5824]: Accepted publickey for core from 139.178.89.65 port 60568 ssh2: RSA SHA256:9hygYNV3qUvJXdEWXIOx7wVbZ1g8nxoR791t75pOQbs
Oct 13 00:09:23.458813 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:09:23.464977 systemd-logind[1519]: New session 19 of user core.
Oct 13 00:09:23.470166 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 00:09:24.201473 sshd[5827]: Connection closed by 139.178.89.65 port 60568
Oct 13 00:09:24.202302 sshd-session[5824]: pam_unix(sshd:session): session closed for user core
Oct 13 00:09:24.207567 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit.
Oct 13 00:09:24.208787 systemd[1]: sshd@18-5.75.247.119:22-139.178.89.65:60568.service: Deactivated successfully.
Oct 13 00:09:24.211868 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 00:09:24.216099 systemd-logind[1519]: Removed session 19.
Oct 13 00:09:24.789017 containerd[1544]: time="2025-10-13T00:09:24.788958071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d34048289616115fc3842267920cd21cd53e2bbd3eb7072485c4f5ed692891\" id:\"dee9d715e4581a9c748a7426e6068b4ee019183bd76871bcfd990f24d7174722\" pid:5850 exited_at:{seconds:1760314164 nanos:788346810}"
Oct 13 00:09:25.760070 containerd[1544]: time="2025-10-13T00:09:25.760017183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df7cfd34d54cee22754968f7cb511a0202ea921564bb96ca455ca7bada6b176d\" id:\"d37e797620e068ef92e553f3eab8025853b7c4337e70187108a987d0a59f51f0\" pid:5874 exited_at:{seconds:1760314165 nanos:759637010}"
Oct 13 00:09:36.680454 containerd[1544]: time="2025-10-13T00:09:36.680403978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d41dc5c1b4da935419770bb7aa35db31f121f2ac5f375ac454d55583e1bf285\" id:\"c55106e8f35ca8288f1053181d216d7e55188fb66e2303e18d02b7eb64b6bddc\" pid:5896 exited_at:{seconds:1760314176 nanos:679866521}"
Oct 13 00:09:39.625599 systemd[1]: cri-containerd-0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a.scope: Deactivated successfully.
Oct 13 00:09:39.627149 systemd[1]: cri-containerd-0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a.scope: Consumed 4.732s CPU time, 69M memory peak, 2.4M read from disk.
Oct 13 00:09:39.630034 containerd[1544]: time="2025-10-13T00:09:39.627692078Z" level=info msg="received exit event container_id:\"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\" id:\"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\" pid:2589 exit_status:1 exited_at:{seconds:1760314179 nanos:625859302}"
Oct 13 00:09:39.630034 containerd[1544]: time="2025-10-13T00:09:39.628459262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\" id:\"0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a\" pid:2589 exit_status:1 exited_at:{seconds:1760314179 nanos:625859302}"
Oct 13 00:09:39.670091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a-rootfs.mount: Deactivated successfully.
Oct 13 00:09:39.815253 kubelet[2750]: E1013 00:09:39.815157 2750 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44916->10.0.0.2:2379: read: connection timed out"
Oct 13 00:09:40.315665 systemd[1]: cri-containerd-273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d.scope: Deactivated successfully.
Oct 13 00:09:40.316192 systemd[1]: cri-containerd-273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d.scope: Consumed 22.717s CPU time, 106.6M memory peak, 3.6M read from disk.
Oct 13 00:09:40.319223 containerd[1544]: time="2025-10-13T00:09:40.319107528Z" level=info msg="received exit event container_id:\"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\" id:\"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\" pid:3074 exit_status:1 exited_at:{seconds:1760314180 nanos:318511710}"
Oct 13 00:09:40.320370 containerd[1544]: time="2025-10-13T00:09:40.320301885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\" id:\"273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d\" pid:3074 exit_status:1 exited_at:{seconds:1760314180 nanos:318511710}"
Oct 13 00:09:40.338620 kubelet[2750]: I1013 00:09:40.338522 2750 scope.go:117] "RemoveContainer" containerID="0effa64754f08969349683c44ff405cc645fe84ece32e1fce4439dbf5959017a"
Oct 13 00:09:40.343371 containerd[1544]: time="2025-10-13T00:09:40.343268750Z" level=info msg="CreateContainer within sandbox \"33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 13 00:09:40.350204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d-rootfs.mount: Deactivated successfully.
Oct 13 00:09:40.370814 containerd[1544]: time="2025-10-13T00:09:40.368771533Z" level=info msg="Container aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:09:40.380218 containerd[1544]: time="2025-10-13T00:09:40.380175363Z" level=info msg="CreateContainer within sandbox \"33a4730611b026bfe7455881e39fb981f9f26265442176c726bf3d2d3c6f0947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f\""
Oct 13 00:09:40.381383 containerd[1544]: time="2025-10-13T00:09:40.381351359Z" level=info msg="StartContainer for \"aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f\""
Oct 13 00:09:40.384736 containerd[1544]: time="2025-10-13T00:09:40.384584618Z" level=info msg="connecting to shim aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f" address="unix:///run/containerd/s/fe3cecfa28607faa2e262efcbc8732c5e51b7dd64e233789bf3740aaa86709af" protocol=ttrpc version=3
Oct 13 00:09:40.410184 systemd[1]: Started cri-containerd-aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f.scope - libcontainer container aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f.
Oct 13 00:09:40.458438 containerd[1544]: time="2025-10-13T00:09:40.458402845Z" level=info msg="StartContainer for \"aad5085ad0e3d43eb0fb539d1381d16ccfeb077c71c0a3cb759002ff6fa32e4f\" returns successfully"
Oct 13 00:09:41.349729 kubelet[2750]: I1013 00:09:41.349688 2750 scope.go:117] "RemoveContainer" containerID="273a41726e1ef688e7e20d32cd7434a54443c39ad2daa7f61e33922d23cfac3d"
Oct 13 00:09:41.354486 containerd[1544]: time="2025-10-13T00:09:41.354444496Z" level=info msg="CreateContainer within sandbox \"d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 13 00:09:41.369293 containerd[1544]: time="2025-10-13T00:09:41.367512495Z" level=info msg="Container 98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:09:41.381098 containerd[1544]: time="2025-10-13T00:09:41.381043028Z" level=info msg="CreateContainer within sandbox \"d70f787bef6ceb4846edd9ffa1bb4d79bc0716c474d787d1c2a119325d6a421f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264\""
Oct 13 00:09:41.383034 containerd[1544]: time="2025-10-13T00:09:41.382989668Z" level=info msg="StartContainer for \"98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264\""
Oct 13 00:09:41.389098 containerd[1544]: time="2025-10-13T00:09:41.389052193Z" level=info msg="connecting to shim 98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264" address="unix:///run/containerd/s/c71ae397777d68019ca248bc4512026497878a8ba4dea67198b913eba17690d7" protocol=ttrpc version=3
Oct 13 00:09:41.425134 systemd[1]: Started cri-containerd-98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264.scope - libcontainer container 98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264.
Oct 13 00:09:41.474647 containerd[1544]: time="2025-10-13T00:09:41.474604565Z" level=info msg="StartContainer for \"98fed32f3189ae640aa311d10d1d9dc51a064f7c2359f9a448fdf9a53316f264\" returns successfully"
Oct 13 00:09:43.781955 kubelet[2750]: E1013 00:09:43.781611 2750 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44576->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-1-0-c-ccbbacf556.186de46a04087e5e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-1-0-c-ccbbacf556,UID:ba5a03efa834908df43ca220df12a187,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-c-ccbbacf556,},FirstTimestamp:2025-10-13 00:09:33.357063774 +0000 UTC m=+215.109266690,LastTimestamp:2025-10-13 00:09:33.357063774 +0000 UTC m=+215.109266690,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-c-ccbbacf556,}"
Oct 13 00:09:45.007736 systemd[1]: cri-containerd-416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f.scope: Deactivated successfully.
Oct 13 00:09:45.008134 systemd[1]: cri-containerd-416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f.scope: Consumed 4.736s CPU time, 26.5M memory peak, 3.1M read from disk.
Oct 13 00:09:45.011566 containerd[1544]: time="2025-10-13T00:09:45.011492116Z" level=info msg="received exit event container_id:\"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\" id:\"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\" pid:2565 exit_status:1 exited_at:{seconds:1760314185 nanos:11071904}"
Oct 13 00:09:45.012403 containerd[1544]: time="2025-10-13T00:09:45.011801565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\" id:\"416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f\" pid:2565 exit_status:1 exited_at:{seconds:1760314185 nanos:11071904}"
Oct 13 00:09:45.037773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416aebd1a40b7f542776212b8d02114755ab2eb0ea3f2ea5611bd407bb9e583f-rootfs.mount: Deactivated successfully.