Aug 12 23:43:38.791436 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:43:38.791461 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Aug 12 21:51:24 -00 2025
Aug 12 23:43:38.791471 kernel: KASLR enabled
Aug 12 23:43:38.791477 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Aug 12 23:43:38.791483 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Aug 12 23:43:38.791488 kernel: random: crng init done
Aug 12 23:43:38.791495 kernel: secureboot: Secure boot disabled
Aug 12 23:43:38.791501 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:43:38.791507 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Aug 12 23:43:38.791512 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:43:38.791520 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791525 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791531 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791576 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791585 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791593 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791600 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791605 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791612 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:43:38.791618 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 12 23:43:38.791623 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Aug 12 23:43:38.791629 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 12 23:43:38.791635 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Aug 12 23:43:38.791641 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Aug 12 23:43:38.791647 kernel: Zone ranges:
Aug 12 23:43:38.791654 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Aug 12 23:43:38.791660 kernel:   DMA32    empty
Aug 12 23:43:38.791666 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Aug 12 23:43:38.791672 kernel:   Device   empty
Aug 12 23:43:38.791678 kernel: Movable zone start for each node
Aug 12 23:43:38.791684 kernel: Early memory node ranges
Aug 12 23:43:38.791690 kernel:   node   0: [mem 0x0000000040000000-0x000000013666ffff]
Aug 12 23:43:38.791696 kernel:   node   0: [mem 0x0000000136670000-0x000000013667ffff]
Aug 12 23:43:38.791702 kernel:   node   0: [mem 0x0000000136680000-0x000000013676ffff]
Aug 12 23:43:38.791708 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Aug 12 23:43:38.791714 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Aug 12 23:43:38.791719 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Aug 12 23:43:38.791725 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Aug 12 23:43:38.791733 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Aug 12 23:43:38.791739 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Aug 12 23:43:38.791748 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Aug 12 23:43:38.791754 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Aug 12 23:43:38.791761 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Aug 12 23:43:38.791768 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:43:38.791775 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:43:38.791781 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:43:38.791787 kernel: psci: Trusted OS migration not required
Aug 12 23:43:38.791793 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:43:38.791800 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:43:38.791807 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 12 23:43:38.791813 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 12 23:43:38.791820 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 12 23:43:38.791826 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:43:38.791832 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:43:38.791840 kernel: CPU features: detected: Spectre-v4
Aug 12 23:43:38.791847 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:43:38.791853 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:43:38.791860 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:43:38.791866 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:43:38.791873 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:43:38.791879 kernel: alternatives: applying boot alternatives
Aug 12 23:43:38.791887 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:43:38.791894 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:43:38.791900 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:43:38.791909 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:43:38.791915 kernel: Fallback order for Node 0: 0
Aug 12 23:43:38.791921 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Aug 12 23:43:38.791928 kernel: Policy zone: Normal
Aug 12 23:43:38.791934 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:43:38.791940 kernel: software IO TLB: area num 2.
Aug 12 23:43:38.791946 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Aug 12 23:43:38.791953 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 12 23:43:38.791959 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:43:38.791966 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:43:38.791973 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 12 23:43:38.791979 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:43:38.791987 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:43:38.791994 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:43:38.792000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 12 23:43:38.792006 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:43:38.792013 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:43:38.792019 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:43:38.792026 kernel: GICv3: 256 SPIs implemented
Aug 12 23:43:38.792032 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:43:38.792038 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:43:38.792045 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:43:38.792051 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 12 23:43:38.792057 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:43:38.792065 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:43:38.792072 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:43:38.792079 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:43:38.792085 kernel: GICv3: using LPI property table @0x0000000100120000
Aug 12 23:43:38.792092 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Aug 12 23:43:38.792098 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:43:38.792105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:43:38.792111 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:43:38.792118 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:43:38.792124 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:43:38.792131 kernel: Console: colour dummy device 80x25
Aug 12 23:43:38.792139 kernel: ACPI: Core revision 20240827
Aug 12 23:43:38.792146 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:43:38.792153 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:43:38.792160 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 12 23:43:38.792166 kernel: landlock: Up and running.
Aug 12 23:43:38.792173 kernel: SELinux:  Initializing.
Aug 12 23:43:38.792179 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:43:38.792187 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:43:38.792193 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:43:38.792214 kernel: rcu: 	Max phase no-delay instances is 400.
Aug 12 23:43:38.792222 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 12 23:43:38.792228 kernel: Remapping and enabling EFI services.
Aug 12 23:43:38.792235 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:43:38.792241 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:43:38.792248 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:43:38.792254 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Aug 12 23:43:38.792261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:43:38.792267 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:43:38.792276 kernel: smp: Brought up 1 node, 2 CPUs
Aug 12 23:43:38.792288 kernel: SMP: Total of 2 processors activated.
Aug 12 23:43:38.792295 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:43:38.792303 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:43:38.792310 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:43:38.792317 kernel: CPU features: detected: Common not Private translations
Aug 12 23:43:38.792324 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:43:38.792331 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:43:38.792339 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:43:38.792346 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:43:38.792353 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:43:38.792360 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:43:38.792367 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:43:38.792374 kernel: alternatives: applying system-wide alternatives
Aug 12 23:43:38.792381 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Aug 12 23:43:38.792388 kernel: Memory: 3859044K/4096000K available (11136K kernel code, 2436K rwdata, 9080K rodata, 39488K init, 1038K bss, 215476K reserved, 16384K cma-reserved)
Aug 12 23:43:38.792395 kernel: devtmpfs: initialized
Aug 12 23:43:38.792414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:43:38.792423 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 12 23:43:38.792430 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:43:38.792437 kernel: 0 pages in range for non-PLT usage
Aug 12 23:43:38.792444 kernel: 508432 pages in range for PLT usage
Aug 12 23:43:38.792451 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:43:38.792457 kernel: SMBIOS 3.0.0 present.
Aug 12 23:43:38.792465 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Aug 12 23:43:38.792471 kernel: DMI: Memory slots populated: 1/1
Aug 12 23:43:38.792480 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:43:38.792487 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:43:38.792495 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:43:38.792502 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:43:38.792508 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:43:38.792515 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Aug 12 23:43:38.792522 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:43:38.792529 kernel: cpuidle: using governor menu
Aug 12 23:43:38.792546 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:43:38.792555 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:43:38.792562 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:43:38.792569 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:43:38.792576 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:43:38.792583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:43:38.792590 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:43:38.792597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:43:38.792604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:43:38.792611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:43:38.792620 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:43:38.792627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:43:38.792633 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:43:38.792640 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:43:38.792647 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:43:38.792654 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:43:38.792660 kernel: ACPI: Interpreter enabled
Aug 12 23:43:38.792667 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:43:38.792675 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:43:38.792683 kernel: ACPI: CPU0 has been hot-added
Aug 12 23:43:38.792689 kernel: ACPI: CPU1 has been hot-added
Aug 12 23:43:38.792696 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:43:38.792703 kernel: printk: legacy console [ttyAMA0] enabled
Aug 12 23:43:38.792710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:43:38.792912 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:43:38.792996 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:43:38.793057 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:43:38.793118 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:43:38.793175 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:43:38.793184 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:43:38.793191 kernel: PCI host bridge to bus 0000:00
Aug 12 23:43:38.793314 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:43:38.793372 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:43:38.793425 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:43:38.793481 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:43:38.793573 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 12 23:43:38.793650 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Aug 12 23:43:38.793720 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Aug 12 23:43:38.793827 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Aug 12 23:43:38.793912 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.793979 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Aug 12 23:43:38.794039 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 12 23:43:38.794097 kernel: pci 0000:00:02.0:   bridge window [mem 0x11000000-0x111fffff]
Aug 12 23:43:38.794156 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Aug 12 23:43:38.794826 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.794911 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Aug 12 23:43:38.794971 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Aug 12 23:43:38.795036 kernel: pci 0000:00:02.1:   bridge window [mem 0x10e00000-0x10ffffff]
Aug 12 23:43:38.795103 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.795162 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Aug 12 23:43:38.795246 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Aug 12 23:43:38.795308 kernel: pci 0000:00:02.2:   bridge window [mem 0x10c00000-0x10dfffff]
Aug 12 23:43:38.795367 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Aug 12 23:43:38.795435 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.795500 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Aug 12 23:43:38.795579 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Aug 12 23:43:38.795641 kernel: pci 0000:00:02.3:   bridge window [mem 0x10a00000-0x10bfffff]
Aug 12 23:43:38.795699 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Aug 12 23:43:38.795766 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.795825 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Aug 12 23:43:38.795883 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Aug 12 23:43:38.795943 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Aug 12 23:43:38.796001 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Aug 12 23:43:38.796073 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.796133 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Aug 12 23:43:38.796191 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Aug 12 23:43:38.796320 kernel: pci 0000:00:02.5:   bridge window [mem 0x10600000-0x107fffff]
Aug 12 23:43:38.796381 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Aug 12 23:43:38.796453 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.796512 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Aug 12 23:43:38.796618 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Aug 12 23:43:38.796681 kernel: pci 0000:00:02.6:   bridge window [mem 0x10400000-0x105fffff]
Aug 12 23:43:38.796739 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Aug 12 23:43:38.796806 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.796867 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Aug 12 23:43:38.796929 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Aug 12 23:43:38.796987 kernel: pci 0000:00:02.7:   bridge window [mem 0x10200000-0x103fffff]
Aug 12 23:43:38.797053 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Aug 12 23:43:38.797114 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Aug 12 23:43:38.797176 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Aug 12 23:43:38.797634 kernel: pci 0000:00:03.0:   bridge window [mem 0x10000000-0x101fffff]
Aug 12 23:43:38.797754 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Aug 12 23:43:38.797817 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Aug 12 23:43:38.797887 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Aug 12 23:43:38.797949 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Aug 12 23:43:38.798010 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:43:38.798070 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Aug 12 23:43:38.798139 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Aug 12 23:43:38.799288 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Aug 12 23:43:38.799407 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Aug 12 23:43:38.799472 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Aug 12 23:43:38.799534 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Aug 12 23:43:38.799667 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Aug 12 23:43:38.799734 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Aug 12 23:43:38.799811 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Aug 12 23:43:38.799872 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Aug 12 23:43:38.799942 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Aug 12 23:43:38.800003 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Aug 12 23:43:38.800064 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Aug 12 23:43:38.800134 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Aug 12 23:43:38.800471 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Aug 12 23:43:38.800608 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Aug 12 23:43:38.800678 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Aug 12 23:43:38.800752 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Aug 12 23:43:38.800816 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Aug 12 23:43:38.800876 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Aug 12 23:43:38.800941 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Aug 12 23:43:38.801001 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Aug 12 23:43:38.801064 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Aug 12 23:43:38.801127 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Aug 12 23:43:38.801186 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Aug 12 23:43:38.802336 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Aug 12 23:43:38.802412 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Aug 12 23:43:38.802473 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Aug 12 23:43:38.802582 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Aug 12 23:43:38.802657 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Aug 12 23:43:38.802717 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Aug 12 23:43:38.802775 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Aug 12 23:43:38.802836 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 12 23:43:38.802896 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Aug 12 23:43:38.802954 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Aug 12 23:43:38.803021 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 12 23:43:38.803080 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Aug 12 23:43:38.803139 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Aug 12 23:43:38.803230 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 12 23:43:38.803296 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Aug 12 23:43:38.803355 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Aug 12 23:43:38.803418 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 12 23:43:38.803480 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Aug 12 23:43:38.803549 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Aug 12 23:43:38.803615 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Aug 12 23:43:38.803675 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Aug 12 23:43:38.803736 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Aug 12 23:43:38.803794 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Aug 12 23:43:38.803857 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Aug 12 23:43:38.803918 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Aug 12 23:43:38.803979 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Aug 12 23:43:38.804037 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Aug 12 23:43:38.804095 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Aug 12 23:43:38.804154 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Aug 12 23:43:38.805307 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Aug 12 23:43:38.805392 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Aug 12 23:43:38.805457 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Aug 12 23:43:38.805525 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Aug 12 23:43:38.805646 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Aug 12 23:43:38.805710 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Aug 12 23:43:38.805775 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Aug 12 23:43:38.805834 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Aug 12 23:43:38.805902 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Aug 12 23:43:38.807252 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Aug 12 23:43:38.807355 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Aug 12 23:43:38.807424 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Aug 12 23:43:38.807487 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Aug 12 23:43:38.807565 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Aug 12 23:43:38.807633 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Aug 12 23:43:38.807695 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Aug 12 23:43:38.807756 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Aug 12 23:43:38.807816 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Aug 12 23:43:38.807877 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Aug 12 23:43:38.807936 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Aug 12 23:43:38.807997 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Aug 12 23:43:38.808055 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Aug 12 23:43:38.808116 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Aug 12 23:43:38.808176 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Aug 12 23:43:38.809305 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Aug 12 23:43:38.809381 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Aug 12 23:43:38.809458 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Aug 12 23:43:38.809552 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Aug 12 23:43:38.809638 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Aug 12 23:43:38.809722 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Aug 12 23:43:38.809785 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 12 23:43:38.809896 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Aug 12 23:43:38.809981 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 12 23:43:38.810043 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Aug 12 23:43:38.810103 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Aug 12 23:43:38.810164 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 12 23:43:38.810242 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Aug 12 23:43:38.810306 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Aug 12 23:43:38.810369 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Aug 12 23:43:38.810427 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Aug 12 23:43:38.810485 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 12 23:43:38.810593 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Aug 12 23:43:38.810662 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Aug 12 23:43:38.810723 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Aug 12 23:43:38.810782 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Aug 12 23:43:38.810852 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Aug 12 23:43:38.810911 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 12 23:43:38.810977 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Aug 12 23:43:38.811037 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Aug 12 23:43:38.811096 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Aug 12 23:43:38.811157 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Aug 12 23:43:38.811710 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 12 23:43:38.811798 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Aug 12 23:43:38.811861 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Aug 12 23:43:38.811920 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Aug 12 23:43:38.811979 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Aug 12 23:43:38.812040 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 12 23:43:38.812106 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Aug 12 23:43:38.812169 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Aug 12 23:43:38.812406 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Aug 12 23:43:38.812481 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Aug 12 23:43:38.812603 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Aug 12 23:43:38.812672 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 12 23:43:38.812739 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Aug 12 23:43:38.812801 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Aug 12 23:43:38.812862 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Aug 12 23:43:38.812927 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Aug 12 23:43:38.812988 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Aug 12 23:43:38.813050 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Aug 12 23:43:38.813108 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 12 23:43:38.813172 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Aug 12 23:43:38.813913 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Aug 12 23:43:38.813988 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Aug 12 23:43:38.814061 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 12 23:43:38.814126 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Aug 12 23:43:38.814184 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Aug 12 23:43:38.814272 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Aug 12 23:43:38.814341 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 12 23:43:38.814407 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:43:38.814463 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:43:38.814581 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:43:38.814667 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Aug 12 23:43:38.814725 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Aug 12 23:43:38.814779 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 12 23:43:38.814845 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Aug 12 23:43:38.814899 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Aug 12 23:43:38.814952 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 12 23:43:38.815013 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Aug 12 23:43:38.815068 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Aug 12 23:43:38.815122 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 12 23:43:38.815187 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Aug 12 23:43:38.815278 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Aug 12 23:43:38.815334 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 12 23:43:38.815400 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Aug 12 23:43:38.815455 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Aug 12 23:43:38.815509 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 12 23:43:38.815585 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Aug 12 23:43:38.815649 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Aug 12 23:43:38.815703 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 12 23:43:38.815767 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Aug 
12 23:43:38.815822 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Aug 12 23:43:38.815875 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Aug 12 23:43:38.815935 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Aug 12 23:43:38.815991 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Aug 12 23:43:38.816045 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Aug 12 23:43:38.816105 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Aug 12 23:43:38.816160 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Aug 12 23:43:38.816781 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Aug 12 23:43:38.816801 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 12 23:43:38.816809 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 12 23:43:38.817090 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 12 23:43:38.817106 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 12 23:43:38.817114 kernel: iommu: Default domain type: Translated Aug 12 23:43:38.817121 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 12 23:43:38.817129 kernel: efivars: Registered efivars operations Aug 12 23:43:38.817136 kernel: vgaarb: loaded Aug 12 23:43:38.817143 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 12 23:43:38.817151 kernel: VFS: Disk quotas dquot_6.6.0 Aug 12 23:43:38.817159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 12 23:43:38.817166 kernel: pnp: PnP ACPI init Aug 12 23:43:38.817280 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Aug 12 23:43:38.817294 kernel: pnp: PnP ACPI: found 1 devices Aug 12 23:43:38.817302 kernel: NET: Registered PF_INET protocol family Aug 12 23:43:38.817309 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 12 23:43:38.817316 kernel: 
tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 12 23:43:38.817324 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 12 23:43:38.817331 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 12 23:43:38.817339 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 12 23:43:38.817349 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 12 23:43:38.817356 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 12 23:43:38.817363 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 12 23:43:38.817371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 12 23:43:38.817481 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Aug 12 23:43:38.817493 kernel: PCI: CLS 0 bytes, default 64 Aug 12 23:43:38.817501 kernel: kvm [1]: HYP mode not available Aug 12 23:43:38.817508 kernel: Initialise system trusted keyrings Aug 12 23:43:38.817516 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 12 23:43:38.817525 kernel: Key type asymmetric registered Aug 12 23:43:38.817533 kernel: Asymmetric key parser 'x509' registered Aug 12 23:43:38.817574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 12 23:43:38.817582 kernel: io scheduler mq-deadline registered Aug 12 23:43:38.817590 kernel: io scheduler kyber registered Aug 12 23:43:38.817597 kernel: io scheduler bfq registered Aug 12 23:43:38.817605 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 12 23:43:38.817681 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Aug 12 23:43:38.817743 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Aug 12 23:43:38.817808 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.817869 kernel: pcieport 0000:00:02.1: PME: Signaling 
with IRQ 51 Aug 12 23:43:38.817929 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Aug 12 23:43:38.817989 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.818052 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Aug 12 23:43:38.818116 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Aug 12 23:43:38.818174 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.819331 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Aug 12 23:43:38.819412 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Aug 12 23:43:38.819473 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.819569 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Aug 12 23:43:38.819647 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Aug 12 23:43:38.819708 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.819770 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Aug 12 23:43:38.819829 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Aug 12 23:43:38.819888 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.819956 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Aug 12 23:43:38.820017 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Aug 12 23:43:38.820077 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.820142 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Aug 12 
23:43:38.821307 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Aug 12 23:43:38.821404 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.821415 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Aug 12 23:43:38.821482 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Aug 12 23:43:38.821559 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Aug 12 23:43:38.821621 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 12 23:43:38.821631 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 12 23:43:38.821639 kernel: ACPI: button: Power Button [PWRB] Aug 12 23:43:38.821647 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 12 23:43:38.821711 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Aug 12 23:43:38.821776 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Aug 12 23:43:38.821787 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 12 23:43:38.821797 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 12 23:43:38.821858 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Aug 12 23:43:38.821868 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Aug 12 23:43:38.821876 kernel: thunder_xcv, ver 1.0 Aug 12 23:43:38.821883 kernel: thunder_bgx, ver 1.0 Aug 12 23:43:38.821891 kernel: nicpf, ver 1.0 Aug 12 23:43:38.821898 kernel: nicvf, ver 1.0 Aug 12 23:43:38.821974 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 12 23:43:38.822034 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:43:38 UTC (1755042218) Aug 12 23:43:38.822045 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 12 23:43:38.822053 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Aug 12 
23:43:38.822060 kernel: NET: Registered PF_INET6 protocol family Aug 12 23:43:38.822067 kernel: watchdog: NMI not fully supported Aug 12 23:43:38.822075 kernel: watchdog: Hard watchdog permanently disabled Aug 12 23:43:38.822082 kernel: Segment Routing with IPv6 Aug 12 23:43:38.822089 kernel: In-situ OAM (IOAM) with IPv6 Aug 12 23:43:38.822097 kernel: NET: Registered PF_PACKET protocol family Aug 12 23:43:38.822106 kernel: Key type dns_resolver registered Aug 12 23:43:38.822113 kernel: registered taskstats version 1 Aug 12 23:43:38.822121 kernel: Loading compiled-in X.509 certificates Aug 12 23:43:38.822128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: e74bfacfa68399ed7282bf533dd5901fdb84b882' Aug 12 23:43:38.822135 kernel: Demotion targets for Node 0: null Aug 12 23:43:38.822143 kernel: Key type .fscrypt registered Aug 12 23:43:38.822150 kernel: Key type fscrypt-provisioning registered Aug 12 23:43:38.822157 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 12 23:43:38.822166 kernel: ima: Allocated hash algorithm: sha1 Aug 12 23:43:38.822173 kernel: ima: No architecture policies found Aug 12 23:43:38.822181 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 12 23:43:38.822188 kernel: clk: Disabling unused clocks Aug 12 23:43:38.822206 kernel: PM: genpd: Disabling unused power domains Aug 12 23:43:38.822228 kernel: Warning: unable to open an initial console. Aug 12 23:43:38.822237 kernel: Freeing unused kernel memory: 39488K Aug 12 23:43:38.822244 kernel: Run /init as init process Aug 12 23:43:38.822252 kernel: with arguments: Aug 12 23:43:38.822262 kernel: /init Aug 12 23:43:38.822269 kernel: with environment: Aug 12 23:43:38.822276 kernel: HOME=/ Aug 12 23:43:38.822283 kernel: TERM=linux Aug 12 23:43:38.822290 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 12 23:43:38.822298 systemd[1]: Successfully made /usr/ read-only. 
Aug 12 23:43:38.822309 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:43:38.822317 systemd[1]: Detected virtualization kvm.
Aug 12 23:43:38.822327 systemd[1]: Detected architecture arm64.
Aug 12 23:43:38.822334 systemd[1]: Running in initrd.
Aug 12 23:43:38.822342 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:43:38.822350 systemd[1]: Hostname set to .
Aug 12 23:43:38.822357 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:43:38.822365 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:43:38.822373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:43:38.822381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:43:38.822391 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:43:38.822399 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:43:38.822407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:43:38.822416 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:43:38.822425 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:43:38.822435 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:43:38.822443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:43:38.822452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:43:38.822460 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:43:38.822468 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:43:38.822476 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:43:38.822484 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:43:38.822491 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:43:38.822499 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:43:38.822507 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:43:38.822516 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:43:38.822524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:43:38.822532 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:43:38.822569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:43:38.822578 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:43:38.822585 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:43:38.822593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:43:38.822601 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:43:38.822609 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 12 23:43:38.822619 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:43:38.822627 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:43:38.822635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:43:38.822643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:43:38.822650 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:43:38.822687 systemd-journald[243]: Collecting audit messages is disabled.
Aug 12 23:43:38.822708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:43:38.822716 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:43:38.822726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:43:38.822734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:43:38.822741 kernel: Bridge firewalling registered
Aug 12 23:43:38.822749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:43:38.822757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:43:38.822766 systemd-journald[243]: Journal started
Aug 12 23:43:38.822784 systemd-journald[243]: Runtime Journal (/run/log/journal/d243420ef846422cb1b94f76b823f61f) is 8M, max 76.5M, 68.5M free.
Aug 12 23:43:38.790169 systemd-modules-load[245]: Inserted module 'overlay'
Aug 12 23:43:38.813736 systemd-modules-load[245]: Inserted module 'br_netfilter'
Aug 12 23:43:38.826316 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:43:38.827592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:43:38.830836 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:43:38.833257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:43:38.841062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:43:38.846759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:43:38.850458 systemd-tmpfiles[259]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 12 23:43:38.855000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:43:38.858438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:43:38.862322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:43:38.871735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:43:38.875339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:43:38.878104 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:43:38.899020 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:43:38.902838 systemd-resolved[276]: Positive Trust Anchors:
Aug 12 23:43:38.902854 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:43:38.902885 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:43:38.908280 systemd-resolved[276]: Defaulting to hostname 'linux'.
Aug 12 23:43:38.909236 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:43:38.912379 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:43:39.004289 kernel: SCSI subsystem initialized
Aug 12 23:43:39.008233 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:43:39.016243 kernel: iscsi: registered transport (tcp)
Aug 12 23:43:39.029423 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:43:39.029487 kernel: QLogic iSCSI HBA Driver
Aug 12 23:43:39.054063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:43:39.082962 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:43:39.088258 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:43:39.139422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:43:39.141823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:43:39.210262 kernel: raid6: neonx8 gen() 15656 MB/s
Aug 12 23:43:39.227275 kernel: raid6: neonx4 gen() 15597 MB/s
Aug 12 23:43:39.244278 kernel: raid6: neonx2 gen() 13111 MB/s
Aug 12 23:43:39.261253 kernel: raid6: neonx1 gen() 10384 MB/s
Aug 12 23:43:39.278265 kernel: raid6: int64x8 gen() 6745 MB/s
Aug 12 23:43:39.295254 kernel: raid6: int64x4 gen() 7293 MB/s
Aug 12 23:43:39.312263 kernel: raid6: int64x2 gen() 6070 MB/s
Aug 12 23:43:39.329263 kernel: raid6: int64x1 gen() 5002 MB/s
Aug 12 23:43:39.329344 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
Aug 12 23:43:39.346271 kernel: raid6: .... xor() 11857 MB/s, rmw enabled
Aug 12 23:43:39.346352 kernel: raid6: using neon recovery algorithm
Aug 12 23:43:39.351401 kernel: xor: measuring software checksum speed
Aug 12 23:43:39.351469 kernel: 8regs : 21573 MB/sec
Aug 12 23:43:39.352569 kernel: 32regs : 21693 MB/sec
Aug 12 23:43:39.352616 kernel: arm64_neon : 28080 MB/sec
Aug 12 23:43:39.352628 kernel: xor: using function: arm64_neon (28080 MB/sec)
Aug 12 23:43:39.407285 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:43:39.415430 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:43:39.420390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:43:39.446937 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Aug 12 23:43:39.452345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:43:39.457097 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:43:39.491374 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
Aug 12 23:43:39.522490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:43:39.525038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:43:39.590585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:43:39.594519 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:43:39.668456 kernel: ACPI: bus type USB registered
Aug 12 23:43:39.668504 kernel: usbcore: registered new interface driver usbfs
Aug 12 23:43:39.668515 kernel: usbcore: registered new interface driver hub
Aug 12 23:43:39.669345 kernel: usbcore: registered new device driver usb
Aug 12 23:43:39.686240 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Aug 12 23:43:39.688410 kernel: scsi host0: Virtio SCSI HBA
Aug 12 23:43:39.695366 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 12 23:43:39.695428 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 12 23:43:39.727478 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Aug 12 23:43:39.727688 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Aug 12 23:43:39.727771 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Aug 12 23:43:39.729288 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Aug 12 23:43:39.735625 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Aug 12 23:43:39.735795 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Aug 12 23:43:39.737360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:43:39.740503 kernel: sd 0:0:0:1: Power-on or device reset occurred
Aug 12 23:43:39.744695 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Aug 12 23:43:39.744807 kernel: sd 0:0:0:1: [sda] Write Protect is off
Aug 12 23:43:39.744920 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Aug 12 23:43:39.745854 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 12 23:43:39.745971 kernel: sr 0:0:0:0: Power-on or device reset occurred
Aug 12 23:43:39.746071 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Aug 12 23:43:39.746144 kernel: hub 1-0:1.0: USB hub found
Aug 12 23:43:39.746262 kernel: hub 1-0:1.0: 4 ports detected
Aug 12 23:43:39.746341 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 12 23:43:39.746351 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Aug 12 23:43:39.743723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:43:39.747477 kernel: hub 2-0:1.0: USB hub found
Aug 12 23:43:39.749487 kernel: hub 2-0:1.0: 4 ports detected
Aug 12 23:43:39.746486 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:43:39.751850 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Aug 12 23:43:39.753903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:43:39.753947 kernel: GPT:17805311 != 80003071
Aug 12 23:43:39.753958 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:43:39.753968 kernel: GPT:17805311 != 80003071
Aug 12 23:43:39.753977 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:43:39.753986 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 12 23:43:39.753610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:43:39.756574 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Aug 12 23:43:39.785252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:43:39.822791 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 12 23:43:39.831163 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 12 23:43:39.855283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 12 23:43:39.865860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 12 23:43:39.866646 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 12 23:43:39.875295 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:43:39.878265 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:43:39.880773 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:43:39.883607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:43:39.885026 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:43:39.889900 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:43:39.909441 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 12 23:43:39.909502 disk-uuid[601]: Primary Header is updated.
Aug 12 23:43:39.909502 disk-uuid[601]: Secondary Entries is updated.
Aug 12 23:43:39.909502 disk-uuid[601]: Secondary Header is updated.
Aug 12 23:43:39.919253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:43:39.983356 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Aug 12 23:43:40.115915 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Aug 12 23:43:40.115971 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Aug 12 23:43:40.117263 kernel: usbcore: registered new interface driver usbhid
Aug 12 23:43:40.117310 kernel: usbhid: USB HID core driver
Aug 12 23:43:40.223307 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Aug 12 23:43:40.351245 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Aug 12 23:43:40.404297 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Aug 12 23:43:40.942241 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 12 23:43:40.944024 disk-uuid[606]: The operation has completed successfully.
Aug 12 23:43:41.002602 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:43:41.002727 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:43:41.025740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:43:41.044688 sh[625]: Success
Aug 12 23:43:41.062497 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:43:41.062756 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:43:41.062778 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 12 23:43:41.076259 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Aug 12 23:43:41.135488 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:43:41.139274 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:43:41.151847 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:43:41.163252 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 12 23:43:41.163326 kernel: BTRFS: device fsid 7658cdd8-2ee4-4f84-82be-1f808605c89c devid 1 transid 42 /dev/mapper/usr (254:0) scanned by mount (637)
Aug 12 23:43:41.166291 kernel: BTRFS info (device dm-0): first mount of filesystem 7658cdd8-2ee4-4f84-82be-1f808605c89c
Aug 12 23:43:41.166416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:43:41.166434 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 12 23:43:41.175732 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:43:41.177932 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 12 23:43:41.178880 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:43:41.179748 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:43:41.183480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:43:41.220653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (670)
Aug 12 23:43:41.220719 kernel: BTRFS info (device sda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:43:41.221520 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:43:41.221583 kernel: BTRFS info (device sda6): using free-space-tree
Aug 12 23:43:41.230246 kernel: BTRFS info (device sda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:43:41.232709 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:43:41.235414 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:43:41.312346 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:43:41.315464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:43:41.351796 systemd-networkd[807]: lo: Link UP
Aug 12 23:43:41.352386 systemd-networkd[807]: lo: Gained carrier
Aug 12 23:43:41.354797 systemd-networkd[807]: Enumeration completed
Aug 12 23:43:41.355435 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:43:41.356095 systemd[1]: Reached target network.target - Network.
Aug 12 23:43:41.357739 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:41.357743 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:43:41.359024 systemd-networkd[807]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:41.359027 systemd-networkd[807]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:43:41.359576 systemd-networkd[807]: eth0: Link UP
Aug 12 23:43:41.361835 systemd-networkd[807]: eth1: Link UP
Aug 12 23:43:41.362039 systemd-networkd[807]: eth0: Gained carrier
Aug 12 23:43:41.362052 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:41.365718 systemd-networkd[807]: eth1: Gained carrier
Aug 12 23:43:41.365730 systemd-networkd[807]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:41.382458 ignition[723]: Ignition 2.21.0
Aug 12 23:43:41.382471 ignition[723]: Stage: fetch-offline
Aug 12 23:43:41.382509 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:41.385830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:43:41.382516 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:41.382729 ignition[723]: parsed url from cmdline: ""
Aug 12 23:43:41.382733 ignition[723]: no config URL provided
Aug 12 23:43:41.382737 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:43:41.382744 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:43:41.382748 ignition[723]: failed to fetch config: resource requires networking
Aug 12 23:43:41.389733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 12 23:43:41.383857 ignition[723]: Ignition finished successfully
Aug 12 23:43:41.391269 systemd-networkd[807]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Aug 12 23:43:41.422006 systemd-networkd[807]: eth0: DHCPv4 address 49.13.54.157/32, gateway 172.31.1.1 acquired from 172.31.1.1
Aug 12 23:43:41.427715 ignition[815]: Ignition 2.21.0
Aug 12 23:43:41.427771 ignition[815]: Stage: fetch
Aug 12 23:43:41.427955 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:41.427967 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:41.428059 ignition[815]: parsed url from cmdline: ""
Aug 12 23:43:41.428062 ignition[815]: no config URL provided
Aug 12 23:43:41.428067 ignition[815]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:43:41.428074 ignition[815]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:43:41.428177 ignition[815]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Aug 12 23:43:41.435806 ignition[815]: GET result: OK
Aug 12 23:43:41.436427 ignition[815]: parsing config with SHA512: a56dffd1adf5d0d31cda2374620c70f3a3de34cf6210a7e724e534e02234b1501485e8414d6e1fece7def961ae37e08826cf9804e82f91dcfb9b8e574f1082aa
Aug 12 23:43:41.443792 unknown[815]: fetched base config from "system"
Aug 12 23:43:41.443816 unknown[815]: fetched base config from "system"
Aug 12 23:43:41.444229 ignition[815]: fetch: fetch complete
Aug 12 23:43:41.443822 unknown[815]: fetched user config from "hetzner"
Aug 12 23:43:41.444242 ignition[815]: fetch: fetch passed
Aug 12 23:43:41.446499 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 12 23:43:41.444338 ignition[815]: Ignition finished successfully
Aug 12 23:43:41.448140 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:43:41.490653 ignition[822]: Ignition 2.21.0
Aug 12 23:43:41.490670 ignition[822]: Stage: kargs
Aug 12 23:43:41.490871 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:41.490883 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:41.494820 ignition[822]: kargs: kargs passed
Aug 12 23:43:41.496391 ignition[822]: Ignition finished successfully
Aug 12 23:43:41.500257 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:43:41.502624 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:43:41.532030 ignition[829]: Ignition 2.21.0
Aug 12 23:43:41.532047 ignition[829]: Stage: disks
Aug 12 23:43:41.532188 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:41.534301 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:43:41.532218 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:41.532974 ignition[829]: disks: disks passed
Aug 12 23:43:41.533023 ignition[829]: Ignition finished successfully
Aug 12 23:43:41.538404 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:43:41.539408 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:43:41.540682 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:43:41.541945 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:43:41.542870 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:43:41.544728 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:43:41.578548 systemd-fsck[838]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Aug 12 23:43:41.582592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:43:41.584960 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:43:41.672611 kernel: EXT4-fs (sda9): mounted filesystem d634334e-91a3-4b77-89ab-775bdd78a572 r/w with ordered data mode. Quota mode: none.
Aug 12 23:43:41.674109 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:43:41.676180 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:43:41.678291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:43:41.680275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:43:41.687143 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 12 23:43:41.690629 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:43:41.692788 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:43:41.696766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:43:41.702989 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (846)
Aug 12 23:43:41.701627 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:43:41.707730 kernel: BTRFS info (device sda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:43:41.707783 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:43:41.709327 kernel: BTRFS info (device sda6): using free-space-tree
Aug 12 23:43:41.722064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:43:41.756134 coreos-metadata[848]: Aug 12 23:43:41.755 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Aug 12 23:43:41.756134 coreos-metadata[848]: Aug 12 23:43:41.756 INFO Fetch successful
Aug 12 23:43:41.759819 coreos-metadata[848]: Aug 12 23:43:41.757 INFO wrote hostname ci-4372-1-0-f-e67fdcf04d to /sysroot/etc/hostname
Aug 12 23:43:41.763534 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 12 23:43:41.765469 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:43:41.770893 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:43:41.776573 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:43:41.781266 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:43:41.885267 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:43:41.887413 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:43:41.889600 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:43:41.907244 kernel: BTRFS info (device sda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:43:41.928454 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:43:41.937222 ignition[964]: INFO : Ignition 2.21.0
Aug 12 23:43:41.937222 ignition[964]: INFO : Stage: mount
Aug 12 23:43:41.940184 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:41.940184 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:41.940184 ignition[964]: INFO : mount: mount passed
Aug 12 23:43:41.940184 ignition[964]: INFO : Ignition finished successfully
Aug 12 23:43:41.940052 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:43:41.945463 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:43:42.164034 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:43:42.167303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:43:42.192272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (975)
Aug 12 23:43:42.194778 kernel: BTRFS info (device sda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:43:42.194831 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:43:42.194842 kernel: BTRFS info (device sda6): using free-space-tree
Aug 12 23:43:42.201017 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:43:42.231499 ignition[992]: INFO : Ignition 2.21.0
Aug 12 23:43:42.231499 ignition[992]: INFO : Stage: files
Aug 12 23:43:42.232982 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:42.232982 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:42.234969 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:43:42.236234 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:43:42.236234 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:43:42.240116 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:43:42.241134 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:43:42.242266 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:43:42.242096 unknown[992]: wrote ssh authorized keys file for user: core
Aug 12 23:43:42.246185 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 12 23:43:42.246185 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 12 23:43:42.385685 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:43:42.677277 systemd-networkd[807]: eth1: Gained IPv6LL
Aug 12 23:43:43.067469 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 12 23:43:43.067469 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:43:43.071367 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:43:43.083144 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:43:43.083144 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:43:43.083144 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 12 23:43:43.331786 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 12 23:43:43.380734 systemd-networkd[807]: eth0: Gained IPv6LL
Aug 12 23:43:43.544894 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:43:43.546166 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 12 23:43:43.548568 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 12 23:43:43.553570 ignition[992]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 12 23:43:43.561869 ignition[992]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:43:43.561869 ignition[992]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:43:43.561869 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:43:43.561869 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:43:43.561869 ignition[992]: INFO : files: files passed
Aug 12 23:43:43.561869 ignition[992]: INFO : Ignition finished successfully
Aug 12 23:43:43.559028 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:43:43.563090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:43:43.569287 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:43:43.577270 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:43:43.577364 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:43:43.584251 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:43:43.584251 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:43:43.587340 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:43:43.587821 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:43:43.590475 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 12 23:43:43.593721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 12 23:43:43.652619 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:43:43.652786 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 12 23:43:43.655435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 12 23:43:43.657154 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:43:43.659004 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:43:43.660748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:43:43.689034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:43:43.691618 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:43:43.715937 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:43:43.717579 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:43:43.718556 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:43:43.719714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:43:43.719904 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:43:43.721590 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:43:43.722966 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:43:43.724575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:43:43.725938 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:43:43.727720 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:43:43.728919 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 12 23:43:43.730097 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:43:43.731052 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:43:43.732093 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:43:43.733125 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:43:43.734065 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:43:43.734843 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:43:43.735006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:43:43.736254 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:43:43.737363 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:43:43.738384 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 12 23:43:43.738519 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:43:43.739575 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:43:43.739736 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:43:43.741217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 12 23:43:43.741337 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:43:43.742354 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 12 23:43:43.742450 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 12 23:43:43.743349 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 12 23:43:43.743486 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 12 23:43:43.746403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 12 23:43:43.747110 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 12 23:43:43.748954 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:43:43.752410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 12 23:43:43.753562 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 12 23:43:43.754261 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:43:43.754968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 12 23:43:43.755067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:43:43.761989 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 12 23:43:43.762083 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 12 23:43:43.772143 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 12 23:43:43.776916 ignition[1046]: INFO : Ignition 2.21.0
Aug 12 23:43:43.776916 ignition[1046]: INFO : Stage: umount
Aug 12 23:43:43.780550 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:43:43.780550 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 12 23:43:43.780550 ignition[1046]: INFO : umount: umount passed
Aug 12 23:43:43.780550 ignition[1046]: INFO : Ignition finished successfully
Aug 12 23:43:43.780267 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 12 23:43:43.780413 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 12 23:43:43.782651 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 12 23:43:43.782700 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 12 23:43:43.783308 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 12 23:43:43.783349 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 12 23:43:43.784626 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 12 23:43:43.784672 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 12 23:43:43.793802 systemd[1]: Stopped target network.target - Network.
Aug 12 23:43:43.794961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:43:43.795027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:43:43.796249 systemd[1]: Stopped target paths.target - Path Units.
Aug 12 23:43:43.797733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 12 23:43:43.804243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:43:43.806570 systemd[1]: Stopped target slices.target - Slice Units.
Aug 12 23:43:43.807351 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 12 23:43:43.810149 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 12 23:43:43.810230 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:43:43.812403 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 12 23:43:43.812446 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:43:43.813332 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 12 23:43:43.813388 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 12 23:43:43.814244 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 12 23:43:43.814281 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 12 23:43:43.815919 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 12 23:43:43.816831 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 12 23:43:43.818474 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:43:43.818582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:43:43.819588 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:43:43.819761 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:43:43.827532 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 12 23:43:43.827693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 12 23:43:43.831825 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 12 23:43:43.832102 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 12 23:43:43.832240 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 12 23:43:43.836843 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 12 23:43:43.837366 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 12 23:43:43.838839 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 12 23:43:43.838878 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:43:43.840811 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 12 23:43:43.842009 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 12 23:43:43.842060 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:43:43.842788 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:43:43.842834 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:43:43.846326 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:43:43.846386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:43:43.847529 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 12 23:43:43.847573 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:43:43.849895 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:43:43.852813 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:43:43.852899 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:43:43.869788 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:43:43.871054 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:43:43.872524 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:43:43.872673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:43:43.875119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:43:43.875272 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:43:43.876063 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:43:43.876100 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:43:43.877591 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:43:43.877645 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:43:43.879971 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:43:43.880020 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:43:43.881669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:43:43.881721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:43:43.884718 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:43:43.885812 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 12 23:43:43.885871 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:43:43.886668 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:43:43.886707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:43:43.887920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:43:43.887961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:43:43.892587 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 12 23:43:43.892641 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 12 23:43:43.892674 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:43:43.902859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:43:43.903073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:43:43.905015 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:43:43.906952 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:43:43.925681 systemd[1]: Switching root.
Aug 12 23:43:43.970267 systemd-journald[243]: Journal stopped
Aug 12 23:43:44.933664 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:43:44.936268 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:43:44.936291 kernel: SELinux: policy capability open_perms=1
Aug 12 23:43:44.936300 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:43:44.936309 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:43:44.936322 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:43:44.936331 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:43:44.936339 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:43:44.936349 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:43:44.936358 kernel: SELinux: policy capability userspace_initial_context=0
Aug 12 23:43:44.936366 kernel: audit: type=1403 audit(1755042224.107:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:43:44.936377 systemd[1]: Successfully loaded SELinux policy in 56.647ms.
Aug 12 23:43:44.936403 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.257ms.
Aug 12 23:43:44.936413 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:43:44.936425 systemd[1]: Detected virtualization kvm.
Aug 12 23:43:44.936435 systemd[1]: Detected architecture arm64.
Aug 12 23:43:44.936445 systemd[1]: Detected first boot.
Aug 12 23:43:44.936455 systemd[1]: Hostname set to .
Aug 12 23:43:44.936471 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:43:44.936486 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:43:44.936498 zram_generator::config[1091]: No configuration found.
Aug 12 23:43:44.936538 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:43:44.936554 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:43:44.936564 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:43:44.936574 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:43:44.936583 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:43:44.936593 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:43:44.936607 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:43:44.936619 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:43:44.936633 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:43:44.936647 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:43:44.936657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:43:44.936673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:43:44.936685 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:43:44.936695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:43:44.936705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:43:44.936716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:43:44.936726 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:43:44.936736 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:43:44.936747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:43:44.936758 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 12 23:43:44.936768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:43:44.936779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:43:44.936789 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:43:44.936799 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:43:44.936809 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:43:44.936820 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:43:44.936829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:43:44.936840 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:43:44.936849 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:43:44.936859 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:43:44.936870 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:43:44.936880 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:43:44.936891 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:43:44.936901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:43:44.936911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:43:44.936921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:43:44.936931 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:43:44.936942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:43:44.936952 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:43:44.936968 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:43:44.936989 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:43:44.937000 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:43:44.937010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:43:44.937020 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:43:44.937030 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:43:44.937040 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:43:44.937050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:43:44.937060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:43:44.937072 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:43:44.937082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:43:44.937092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:43:44.937102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:43:44.937111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:43:44.937121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:43:44.937131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:43:44.937141 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:43:44.937156 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:43:44.937167 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:43:44.937176 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:43:44.937187 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:43:44.937209 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:43:44.937225 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:43:44.937238 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:43:44.937249 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:43:44.937259 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:43:44.937272 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:43:44.937285 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:43:44.937299 systemd[1]: Stopped verity-setup.service.
Aug 12 23:43:44.937309 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:43:44.937319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:43:44.937329 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:43:44.937339 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:43:44.937349 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:43:44.937359 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:43:44.937370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:43:44.937388 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:43:44.937400 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:43:44.937410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:43:44.937419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:43:44.937430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:43:44.937440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:43:44.937450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:43:44.937461 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:43:44.937472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:43:44.937520 systemd-journald[1162]: Collecting audit messages is disabled.
Aug 12 23:43:44.937547 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:43:44.937557 kernel: fuse: init (API version 7.41)
Aug 12 23:43:44.937567 systemd-journald[1162]: Journal started
Aug 12 23:43:44.937589 systemd-journald[1162]: Runtime Journal (/run/log/journal/d243420ef846422cb1b94f76b823f61f) is 8M, max 76.5M, 68.5M free.
Aug 12 23:43:44.641981 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:43:44.667569 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 12 23:43:44.668051 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:43:44.943656 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:43:44.949463 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:43:44.960315 kernel: loop: module loaded
Aug 12 23:43:44.958794 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:43:44.958967 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:43:44.962675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:43:44.963920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:43:44.965059 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:43:44.965097 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:43:44.968103 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:43:44.971439 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:43:44.972136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:43:44.975476 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:43:44.982728 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:43:44.984339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:43:44.985920 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:43:44.991821 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:43:44.993359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:43:44.993718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:43:44.995226 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:43:45.000033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:43:45.007347 kernel: ACPI: bus type drm_connector registered
Aug 12 23:43:45.012618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:43:45.014105 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:43:45.014303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:43:45.019471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:43:45.022551 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:43:45.024699 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:43:45.029021 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:43:45.031900 systemd-journald[1162]: Time spent on flushing to /var/log/journal/d243420ef846422cb1b94f76b823f61f is 74.429ms for 1164 entries.
Aug 12 23:43:45.031900 systemd-journald[1162]: System Journal (/var/log/journal/d243420ef846422cb1b94f76b823f61f) is 8M, max 584.8M, 576.8M free.
Aug 12 23:43:45.118579 systemd-journald[1162]: Received client request to flush runtime journal.
Aug 12 23:43:45.118616 kernel: loop0: detected capacity change from 0 to 138376
Aug 12 23:43:45.118628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:43:45.034404 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:43:45.098549 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:43:45.105615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:43:45.123766 kernel: loop1: detected capacity change from 0 to 207008
Aug 12 23:43:45.122681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:43:45.139211 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:43:45.143813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:43:45.176222 kernel: loop2: detected capacity change from 0 to 107312
Aug 12 23:43:45.199912 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Aug 12 23:43:45.199930 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Aug 12 23:43:45.213263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:43:45.229326 kernel: loop3: detected capacity change from 0 to 8
Aug 12 23:43:45.250231 kernel: loop4: detected capacity change from 0 to 138376
Aug 12 23:43:45.279235 kernel: loop5: detected capacity change from 0 to 207008
Aug 12 23:43:45.303649 kernel: loop6: detected capacity change from 0 to 107312
Aug 12 23:43:45.328447 kernel: loop7: detected capacity change from 0 to 8
Aug 12 23:43:45.329710 (sd-merge)[1232]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Aug 12 23:43:45.331897 (sd-merge)[1232]: Merged extensions into '/usr'.
Aug 12 23:43:45.341356 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:43:45.341494 systemd[1]: Reloading...
Aug 12 23:43:45.463286 zram_generator::config[1255]: No configuration found.
Aug 12 23:43:45.516229 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:43:45.583411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:43:45.661337 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:43:45.661672 systemd[1]: Reloading finished in 318 ms.
Aug 12 23:43:45.692465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:43:45.695223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:43:45.701493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:43:45.710418 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:43:45.713471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:43:45.725155 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:43:45.726068 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:43:45.730316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:43:45.733332 systemd[1]: Reload requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:43:45.733351 systemd[1]: Reloading...
Aug 12 23:43:45.750230 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 12 23:43:45.750565 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 12 23:43:45.750820 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:43:45.751025 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:43:45.751701 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:43:45.751918 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Aug 12 23:43:45.751970 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Aug 12 23:43:45.761379 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:43:45.761392 systemd-tmpfiles[1298]: Skipping /boot
Aug 12 23:43:45.771071 systemd-udevd[1302]: Using default interface naming scheme 'v255'.
Aug 12 23:43:45.774043 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:43:45.774061 systemd-tmpfiles[1298]: Skipping /boot
Aug 12 23:43:45.804233 zram_generator::config[1325]: No configuration found.
Aug 12 23:43:46.004155 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:43:46.122473 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Aug 12 23:43:46.122546 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 12 23:43:46.122561 kernel: [drm] features: -context_init
Aug 12 23:43:46.124669 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 12 23:43:46.124931 systemd[1]: Reloading finished in 391 ms.
Aug 12 23:43:46.125637 kernel: [drm] number of scanouts: 1
Aug 12 23:43:46.127219 kernel: [drm] number of cap sets: 0
Aug 12 23:43:46.129305 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Aug 12 23:43:46.129349 kernel: mousedev: PS/2 mouse device common for all mice
Aug 12 23:43:46.136058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:43:46.148181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:43:46.165791 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:43:46.171470 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:43:46.172325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:43:46.174579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:43:46.177312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:43:46.180799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:43:46.181804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:43:46.181923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:43:46.188742 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:43:46.202506 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:43:46.207514 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:43:46.210640 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:43:46.220917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:43:46.222273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:43:46.224256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:43:46.224438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:43:46.224553 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:43:46.229903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:43:46.232660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:43:46.242418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:43:46.243430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:43:46.243479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:43:46.246474 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:43:46.248775 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:43:46.249673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:43:46.250274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:43:46.252672 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:43:46.270637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:43:46.279090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:43:46.283151 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:43:46.285725 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:43:46.286868 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:43:46.287760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:43:46.298132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:43:46.298742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:43:46.304066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:43:46.307331 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:43:46.314974 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:43:46.320238 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Aug 12 23:43:46.321567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:43:46.326836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:43:46.329678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:43:46.334873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:43:46.337589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:43:46.338221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:43:46.338263 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:43:46.338287 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:43:46.361590 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:43:46.377845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:43:46.378036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:43:46.379373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:43:46.379595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:43:46.380759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:43:46.385279 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:43:46.387484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:43:46.387681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:43:46.388437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:43:46.396983 augenrules[1475]: No rules
Aug 12 23:43:46.398864 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:43:46.404987 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:43:46.409020 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 12 23:43:46.413622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:43:46.445366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:43:46.487101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:43:46.606901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:43:46.612257 systemd-networkd[1417]: lo: Link UP
Aug 12 23:43:46.612268 systemd-networkd[1417]: lo: Gained carrier
Aug 12 23:43:46.614002 systemd-networkd[1417]: Enumeration completed
Aug 12 23:43:46.614340 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:43:46.616387 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:46.616398 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:43:46.617083 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:46.617086 systemd-networkd[1417]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:43:46.617383 systemd-networkd[1417]: eth0: Link UP
Aug 12 23:43:46.617476 systemd-networkd[1417]: eth0: Gained carrier
Aug 12 23:43:46.617491 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:46.619034 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:43:46.622429 systemd-networkd[1417]: eth1: Link UP
Aug 12 23:43:46.623057 systemd-networkd[1417]: eth1: Gained carrier
Aug 12 23:43:46.623079 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:43:46.623600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:43:46.638038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:43:46.639390 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:43:46.646085 systemd-resolved[1419]: Positive Trust Anchors:
Aug 12 23:43:46.647568 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:43:46.647653 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:43:46.652278 systemd-networkd[1417]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Aug 12 23:43:46.652944 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Aug 12 23:43:46.664892 systemd-resolved[1419]: Using system hostname 'ci-4372-1-0-f-e67fdcf04d'.
Aug 12 23:43:46.669165 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:43:46.670077 systemd[1]: Reached target network.target - Network.
Aug 12 23:43:46.672290 systemd-networkd[1417]: eth0: DHCPv4 address 49.13.54.157/32, gateway 172.31.1.1 acquired from 172.31.1.1
Aug 12 23:43:46.672371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:43:46.673270 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:43:46.674088 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:43:46.674869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:43:46.675645 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Aug 12 23:43:46.676368 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:43:46.677064 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:43:46.677822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:43:46.678560 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:43:46.678598 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:43:46.679099 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:43:46.680996 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:43:46.683424 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:43:46.686575 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:43:46.687460 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:43:46.688188 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:43:46.691646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:43:46.693381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 12 23:43:46.698293 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 12 23:43:46.699239 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:43:46.700452 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:43:46.702277 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:43:46.702863 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:43:46.702904 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:43:46.706336 systemd[1]: Starting containerd.service - containerd container runtime... Aug 12 23:43:46.708533 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 12 23:43:46.712380 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:43:46.720236 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:43:46.724406 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:43:46.726759 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:43:46.727331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:43:46.730388 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:43:46.732037 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 12 23:43:46.736777 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Aug 12 23:43:46.741491 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Aug 12 23:43:46.744672 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 12 23:43:46.750437 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:43:46.751861 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:43:46.753352 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:43:46.756403 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:43:46.757053 jq[1511]: false Aug 12 23:43:46.769349 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:43:46.778271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:43:46.779288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:43:46.779509 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:43:46.781139 extend-filesystems[1514]: Found /dev/sda6 Aug 12 23:43:46.790268 extend-filesystems[1514]: Found /dev/sda9 Aug 12 23:43:46.790656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:43:46.790874 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 12 23:43:46.794875 extend-filesystems[1514]: Checking size of /dev/sda9 Aug 12 23:43:46.816391 coreos-metadata[1508]: Aug 12 23:43:46.816 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Aug 12 23:43:46.817435 jq[1522]: true Aug 12 23:43:46.821893 coreos-metadata[1508]: Aug 12 23:43:46.821 INFO Fetch successful Aug 12 23:43:46.821893 coreos-metadata[1508]: Aug 12 23:43:46.821 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Aug 12 23:43:46.823774 coreos-metadata[1508]: Aug 12 23:43:46.822 INFO Fetch successful Aug 12 23:43:46.824232 extend-filesystems[1514]: Resized partition /dev/sda9 Aug 12 23:43:46.828104 extend-filesystems[1553]: resize2fs 1.47.2 (1-Jan-2025) Aug 12 23:43:46.840989 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Aug 12 23:43:46.841057 tar[1531]: linux-arm64/LICENSE Aug 12 23:43:46.841057 tar[1531]: linux-arm64/helm Aug 12 23:43:46.853962 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:43:46.856661 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 12 23:43:46.859055 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:43:46.863185 update_engine[1521]: I20250812 23:43:46.861709 1521 main.cc:92] Flatcar Update Engine starting Aug 12 23:43:46.872602 jq[1552]: true Aug 12 23:43:46.872162 dbus-daemon[1509]: [system] SELinux support is enabled Aug 12 23:43:46.874171 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:43:46.888780 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:43:46.888820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Aug 12 23:43:46.892443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:43:46.892479 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 12 23:43:46.894085 update_engine[1521]: I20250812 23:43:46.894019 1521 update_check_scheduler.cc:74] Next update check in 5m30s Aug 12 23:43:46.894645 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:43:46.921834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 12 23:43:47.029175 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Aug 12 23:43:47.034403 systemd-logind[1520]: New seat seat0. Aug 12 23:43:47.040124 extend-filesystems[1553]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 12 23:43:47.040124 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 5 Aug 12 23:43:47.040124 extend-filesystems[1553]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Aug 12 23:43:47.046282 extend-filesystems[1514]: Resized filesystem in /dev/sda9 Aug 12 23:43:47.045812 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 12 23:43:47.046024 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:43:47.048177 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:43:47.048229 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Aug 12 23:43:47.048612 systemd[1]: Started systemd-logind.service - User Login Management. Aug 12 23:43:47.067183 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:43:47.069802 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:43:47.074588 systemd[1]: Starting sshkeys.service... 
Aug 12 23:43:47.084425 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 12 23:43:47.085482 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:43:47.120123 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 12 23:43:47.125393 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 12 23:43:47.195599 coreos-metadata[1595]: Aug 12 23:43:47.195 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Aug 12 23:43:47.197872 coreos-metadata[1595]: Aug 12 23:43:47.197 INFO Fetch successful Aug 12 23:43:47.201180 unknown[1595]: wrote ssh authorized keys file for user: core Aug 12 23:43:47.223265 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:43:47.236943 update-ssh-keys[1602]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:43:47.238996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 12 23:43:47.248018 systemd[1]: Finished sshkeys.service. 
Aug 12 23:43:47.320836 containerd[1548]: time="2025-08-12T23:43:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 12 23:43:47.325787 containerd[1548]: time="2025-08-12T23:43:47.325732480Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 12 23:43:47.348871 containerd[1548]: time="2025-08-12T23:43:47.348819720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.92µs"
Aug 12 23:43:47.348871 containerd[1548]: time="2025-08-12T23:43:47.348861280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 12 23:43:47.348871 containerd[1548]: time="2025-08-12T23:43:47.348881000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 12 23:43:47.349079 containerd[1548]: time="2025-08-12T23:43:47.349055640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 12 23:43:47.349107 containerd[1548]: time="2025-08-12T23:43:47.349079200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 12 23:43:47.349125 containerd[1548]: time="2025-08-12T23:43:47.349106920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349193 containerd[1548]: time="2025-08-12T23:43:47.349168240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349193 containerd[1548]: time="2025-08-12T23:43:47.349187240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349471 containerd[1548]: time="2025-08-12T23:43:47.349441400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349471 containerd[1548]: time="2025-08-12T23:43:47.349465840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349574 containerd[1548]: time="2025-08-12T23:43:47.349479440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349574 containerd[1548]: time="2025-08-12T23:43:47.349487800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349614 containerd[1548]: time="2025-08-12T23:43:47.349593080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349819 containerd[1548]: time="2025-08-12T23:43:47.349792880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349852 containerd[1548]: time="2025-08-12T23:43:47.349831960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:43:47.349852 containerd[1548]: time="2025-08-12T23:43:47.349843840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 12 23:43:47.349980 containerd[1548]: time="2025-08-12T23:43:47.349964480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 12 23:43:47.353321 containerd[1548]: time="2025-08-12T23:43:47.353287360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 12 23:43:47.353420 containerd[1548]: time="2025-08-12T23:43:47.353400400Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:43:47.360013 containerd[1548]: time="2025-08-12T23:43:47.359965480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360032680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360048480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360062080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360075920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360091080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 12 23:43:47.360107 containerd[1548]: time="2025-08-12T23:43:47.360104760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 12 23:43:47.360239 containerd[1548]: time="2025-08-12T23:43:47.360117240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 12 23:43:47.360239 containerd[1548]: time="2025-08-12T23:43:47.360129920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 12 23:43:47.360239 containerd[1548]: time="2025-08-12T23:43:47.360140720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 12 23:43:47.360239 containerd[1548]: time="2025-08-12T23:43:47.360150640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 12 23:43:47.360239 containerd[1548]: time="2025-08-12T23:43:47.360164080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360315480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360345560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360369440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360381080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360391960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360403560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360415440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360425760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360438480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360449520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 12 23:43:47.360653 containerd[1548]: time="2025-08-12T23:43:47.360465640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 12 23:43:47.361093 containerd[1548]: time="2025-08-12T23:43:47.360667240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 12 23:43:47.361093 containerd[1548]: time="2025-08-12T23:43:47.360684720Z" level=info msg="Start snapshots syncer"
Aug 12 23:43:47.361093 containerd[1548]: time="2025-08-12T23:43:47.360708720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 12 23:43:47.361153 containerd[1548]: time="2025-08-12T23:43:47.360921800Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 12 23:43:47.361153 containerd[1548]: time="2025-08-12T23:43:47.360968360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 12 23:43:47.361153 containerd[1548]: time="2025-08-12T23:43:47.361052560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 12 23:43:47.361293 containerd[1548]: time="2025-08-12T23:43:47.361173720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 12 23:43:47.363280 containerd[1548]: time="2025-08-12T23:43:47.363224440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 12 23:43:47.363280 containerd[1548]: time="2025-08-12T23:43:47.363257600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363289800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363306440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363317920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363329760Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363400640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363415680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 12 23:43:47.363644 containerd[1548]: time="2025-08-12T23:43:47.363440960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.363487880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365236640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365250440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365260000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365268320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365294520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365308240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365395480Z" level=info msg="runtime interface created"
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365403000Z" level=info msg="created NRI interface"
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365414280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 12 23:43:47.365426 containerd[1548]: time="2025-08-12T23:43:47.365430400Z" level=info msg="Connect containerd service"
Aug 12 23:43:47.365665 containerd[1548]: time="2025-08-12T23:43:47.365485640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 12 23:43:47.367599 containerd[1548]: time="2025-08-12T23:43:47.367466800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.542425960Z" level=info msg="Start subscribing containerd event"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.542710480Z" level=info msg="Start recovering state"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.543050880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.543121720Z" level=info msg="Start event monitor"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.543143880Z" level=info msg="Start cni network conf syncer for default"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.543166320Z" level=info msg="Start streaming server"
Aug 12 23:43:47.543273 containerd[1548]: time="2025-08-12T23:43:47.543182760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 12 23:43:47.547212 containerd[1548]: time="2025-08-12T23:43:47.543190920Z" level=info msg="runtime interface starting up..."
Aug 12 23:43:47.547212 containerd[1548]: time="2025-08-12T23:43:47.543534000Z" level=info msg="starting plugins..."
Aug 12 23:43:47.547212 containerd[1548]: time="2025-08-12T23:43:47.543561160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 12 23:43:47.547212 containerd[1548]: time="2025-08-12T23:43:47.543122200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 12 23:43:47.547212 containerd[1548]: time="2025-08-12T23:43:47.543839720Z" level=info msg="containerd successfully booted in 0.223920s"
Aug 12 23:43:47.543952 systemd[1]: Started containerd.service - containerd container runtime.
Aug 12 23:43:47.598295 tar[1531]: linux-arm64/README.md Aug 12 23:43:47.618251 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:43:47.988377 systemd-networkd[1417]: eth1: Gained IPv6LL Aug 12 23:43:47.989330 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Aug 12 23:43:47.989709 systemd-networkd[1417]: eth0: Gained IPv6LL Aug 12 23:43:47.991681 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Aug 12 23:43:47.995401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:43:47.996553 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:43:48.000413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:43:48.003614 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:43:48.040383 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:43:48.072531 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:43:48.107184 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:43:48.111711 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:43:48.134093 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:43:48.135009 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:43:48.140707 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:43:48.165380 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:43:48.171987 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:43:48.175060 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 12 23:43:48.176290 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:43:48.872157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:43:48.874987 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:43:48.876360 systemd[1]: Startup finished in 2.313s (kernel) + 5.491s (initrd) + 4.825s (userspace) = 12.631s. Aug 12 23:43:48.884280 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:43:49.413047 kubelet[1656]: E0812 23:43:49.412980 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:43:49.415735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:43:49.415904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:43:49.416278 systemd[1]: kubelet.service: Consumed 926ms CPU time, 257M memory peak. Aug 12 23:43:53.030108 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:43:53.032075 systemd[1]: Started sshd@0-49.13.54.157:22-139.178.68.195:42506.service - OpenSSH per-connection server daemon (139.178.68.195:42506). Aug 12 23:43:54.128159 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 42506 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o Aug 12 23:43:54.131836 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:54.139449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:43:54.142528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:43:54.152755 systemd-logind[1520]: New session 1 of user core. Aug 12 23:43:54.167112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Aug 12 23:43:54.171052 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:43:54.189383 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:43:54.192572 systemd-logind[1520]: New session c1 of user core. Aug 12 23:43:54.363276 systemd[1672]: Queued start job for default target default.target. Aug 12 23:43:54.374759 systemd[1672]: Created slice app.slice - User Application Slice. Aug 12 23:43:54.374806 systemd[1672]: Reached target paths.target - Paths. Aug 12 23:43:54.374864 systemd[1672]: Reached target timers.target - Timers. Aug 12 23:43:54.377024 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:43:54.393648 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:43:54.393787 systemd[1672]: Reached target sockets.target - Sockets. Aug 12 23:43:54.393852 systemd[1672]: Reached target basic.target - Basic System. Aug 12 23:43:54.393891 systemd[1672]: Reached target default.target - Main User Target. Aug 12 23:43:54.393925 systemd[1672]: Startup finished in 191ms. Aug 12 23:43:54.394107 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:43:54.403568 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:43:55.130767 systemd[1]: Started sshd@1-49.13.54.157:22-139.178.68.195:42512.service - OpenSSH per-connection server daemon (139.178.68.195:42512). Aug 12 23:43:56.148183 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 42512 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o Aug 12 23:43:56.150278 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:56.157545 systemd-logind[1520]: New session 2 of user core. Aug 12 23:43:56.164611 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 12 23:43:56.838283 sshd[1685]: Connection closed by 139.178.68.195 port 42512 Aug 12 23:43:56.839294 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:56.846150 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:43:56.846612 systemd[1]: sshd@1-49.13.54.157:22-139.178.68.195:42512.service: Deactivated successfully. Aug 12 23:43:56.848579 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:43:56.851858 systemd-logind[1520]: Removed session 2. Aug 12 23:43:57.012457 systemd[1]: Started sshd@2-49.13.54.157:22-139.178.68.195:42520.service - OpenSSH per-connection server daemon (139.178.68.195:42520). Aug 12 23:43:58.036243 sshd[1691]: Accepted publickey for core from 139.178.68.195 port 42520 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o Aug 12 23:43:58.038085 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:58.043422 systemd-logind[1520]: New session 3 of user core. Aug 12 23:43:58.051554 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:43:58.720872 sshd[1693]: Connection closed by 139.178.68.195 port 42520 Aug 12 23:43:58.721855 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:58.726854 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:43:58.727230 systemd[1]: sshd@2-49.13.54.157:22-139.178.68.195:42520.service: Deactivated successfully. Aug 12 23:43:58.729756 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:43:58.734340 systemd-logind[1520]: Removed session 3. Aug 12 23:43:58.894653 systemd[1]: Started sshd@3-49.13.54.157:22-139.178.68.195:42534.service - OpenSSH per-connection server daemon (139.178.68.195:42534). Aug 12 23:43:59.666975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 12 23:43:59.670609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:43:59.817516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:43:59.823875 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:43:59.875355 kubelet[1709]: E0812 23:43:59.875245 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:43:59.879460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:43:59.879610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:43:59.881269 systemd[1]: kubelet.service: Consumed 167ms CPU time, 107.7M memory peak.
Aug 12 23:43:59.912299 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 42534 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:43:59.914982 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:43:59.922269 systemd-logind[1520]: New session 4 of user core.
Aug 12 23:43:59.929565 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 12 23:44:00.601360 sshd[1716]: Connection closed by 139.178.68.195 port 42534
Aug 12 23:44:00.602507 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:00.606492 systemd[1]: sshd@3-49.13.54.157:22-139.178.68.195:42534.service: Deactivated successfully.
Aug 12 23:44:00.608333 systemd[1]: session-4.scope: Deactivated successfully.
Aug 12 23:44:00.611821 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Aug 12 23:44:00.613821 systemd-logind[1520]: Removed session 4.
Aug 12 23:44:00.775966 systemd[1]: Started sshd@4-49.13.54.157:22-139.178.68.195:60454.service - OpenSSH per-connection server daemon (139.178.68.195:60454).
Aug 12 23:44:01.790102 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 60454 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:44:01.792258 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:01.797920 systemd-logind[1520]: New session 5 of user core.
Aug 12 23:44:01.806564 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 12 23:44:02.328803 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 12 23:44:02.329875 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:44:02.348573 sudo[1725]: pam_unix(sudo:session): session closed for user root
Aug 12 23:44:02.511401 sshd[1724]: Connection closed by 139.178.68.195 port 60454
Aug 12 23:44:02.511149 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:02.517020 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Aug 12 23:44:02.517939 systemd[1]: sshd@4-49.13.54.157:22-139.178.68.195:60454.service: Deactivated successfully.
Aug 12 23:44:02.520387 systemd[1]: session-5.scope: Deactivated successfully.
Aug 12 23:44:02.524668 systemd-logind[1520]: Removed session 5.
Aug 12 23:44:02.684697 systemd[1]: Started sshd@5-49.13.54.157:22-139.178.68.195:60462.service - OpenSSH per-connection server daemon (139.178.68.195:60462).
Aug 12 23:44:03.695304 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 60462 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:44:03.697669 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:03.705360 systemd-logind[1520]: New session 6 of user core.
Aug 12 23:44:03.713514 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 12 23:44:04.223670 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 12 23:44:04.224046 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:44:04.229931 sudo[1735]: pam_unix(sudo:session): session closed for user root
Aug 12 23:44:04.236697 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 12 23:44:04.237015 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:44:04.246984 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:44:04.303587 augenrules[1757]: No rules
Aug 12 23:44:04.305192 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:44:04.305725 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:44:04.309072 sudo[1734]: pam_unix(sudo:session): session closed for user root
Aug 12 23:44:04.471107 sshd[1733]: Connection closed by 139.178.68.195 port 60462
Aug 12 23:44:04.470969 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:04.476500 systemd[1]: sshd@5-49.13.54.157:22-139.178.68.195:60462.service: Deactivated successfully.
Aug 12 23:44:04.479685 systemd[1]: session-6.scope: Deactivated successfully.
Aug 12 23:44:04.482438 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Aug 12 23:44:04.483595 systemd-logind[1520]: Removed session 6.
Aug 12 23:44:04.646331 systemd[1]: Started sshd@6-49.13.54.157:22-139.178.68.195:60472.service - OpenSSH per-connection server daemon (139.178.68.195:60472).
Aug 12 23:44:05.668479 sshd[1766]: Accepted publickey for core from 139.178.68.195 port 60472 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:44:05.670277 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:05.675495 systemd-logind[1520]: New session 7 of user core.
Aug 12 23:44:05.682522 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 12 23:44:06.197378 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 12 23:44:06.197684 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 12 23:44:06.548540 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 12 23:44:06.559800 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 12 23:44:06.803071 dockerd[1787]: time="2025-08-12T23:44:06.802623600Z" level=info msg="Starting up"
Aug 12 23:44:06.807614 dockerd[1787]: time="2025-08-12T23:44:06.807297560Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 12 23:44:06.866531 dockerd[1787]: time="2025-08-12T23:44:06.866495040Z" level=info msg="Loading containers: start."
Aug 12 23:44:06.880213 kernel: Initializing XFRM netlink socket
Aug 12 23:44:07.075330 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Aug 12 23:44:07.127932 systemd-networkd[1417]: docker0: Link UP
Aug 12 23:44:07.133108 systemd-timesyncd[1435]: Contacted time server 116.203.96.227:123 (2.flatcar.pool.ntp.org).
Aug 12 23:44:07.133520 systemd-timesyncd[1435]: Initial clock synchronization to Tue 2025-08-12 23:44:07.124415 UTC.
Aug 12 23:44:07.133856 dockerd[1787]: time="2025-08-12T23:44:07.133789200Z" level=info msg="Loading containers: done."
Aug 12 23:44:07.154221 dockerd[1787]: time="2025-08-12T23:44:07.154131000Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 12 23:44:07.154435 dockerd[1787]: time="2025-08-12T23:44:07.154276440Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 12 23:44:07.154467 dockerd[1787]: time="2025-08-12T23:44:07.154447520Z" level=info msg="Initializing buildkit"
Aug 12 23:44:07.184756 dockerd[1787]: time="2025-08-12T23:44:07.184684800Z" level=info msg="Completed buildkit initialization"
Aug 12 23:44:07.196501 dockerd[1787]: time="2025-08-12T23:44:07.196442600Z" level=info msg="Daemon has completed initialization"
Aug 12 23:44:07.196501 dockerd[1787]: time="2025-08-12T23:44:07.196555760Z" level=info msg="API listen on /run/docker.sock"
Aug 12 23:44:07.197107 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 12 23:44:08.316159 containerd[1548]: time="2025-08-12T23:44:08.316095683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 12 23:44:08.950146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486190784.mount: Deactivated successfully.
Aug 12 23:44:10.031034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 12 23:44:10.033441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:44:10.035689 containerd[1548]: time="2025-08-12T23:44:10.034668242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:10.036858 containerd[1548]: time="2025-08-12T23:44:10.036811853Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327873"
Aug 12 23:44:10.037994 containerd[1548]: time="2025-08-12T23:44:10.037934374Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:10.042222 containerd[1548]: time="2025-08-12T23:44:10.041732500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:10.043603 containerd[1548]: time="2025-08-12T23:44:10.043558937Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.72739309s"
Aug 12 23:44:10.043718 containerd[1548]: time="2025-08-12T23:44:10.043704669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Aug 12 23:44:10.044657 containerd[1548]: time="2025-08-12T23:44:10.044633600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 12 23:44:10.178971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:44:10.191755 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:44:10.248627 kubelet[2051]: E0812 23:44:10.248563 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:44:10.253274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:44:10.253550 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:44:10.255365 systemd[1]: kubelet.service: Consumed 166ms CPU time, 104.6M memory peak.
Aug 12 23:44:11.195558 containerd[1548]: time="2025-08-12T23:44:11.195482355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:11.197254 containerd[1548]: time="2025-08-12T23:44:11.197064270Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529716"
Aug 12 23:44:11.198930 containerd[1548]: time="2025-08-12T23:44:11.198871608Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:11.204585 containerd[1548]: time="2025-08-12T23:44:11.204508927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:11.205855 containerd[1548]: time="2025-08-12T23:44:11.205656231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.160881015s"
Aug 12 23:44:11.205855 containerd[1548]: time="2025-08-12T23:44:11.205704650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Aug 12 23:44:11.206505 containerd[1548]: time="2025-08-12T23:44:11.206286278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 12 23:44:12.306243 containerd[1548]: time="2025-08-12T23:44:12.305238057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:12.306856 containerd[1548]: time="2025-08-12T23:44:12.306809660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484158"
Aug 12 23:44:12.308788 containerd[1548]: time="2025-08-12T23:44:12.308734558Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:12.312663 containerd[1548]: time="2025-08-12T23:44:12.312616463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:12.313642 containerd[1548]: time="2025-08-12T23:44:12.313445886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.107123144s"
Aug 12 23:44:12.313864 containerd[1548]: time="2025-08-12T23:44:12.313712938Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Aug 12 23:44:12.314637 containerd[1548]: time="2025-08-12T23:44:12.314575028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 12 23:44:13.275461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593799229.mount: Deactivated successfully.
Aug 12 23:44:13.697392 containerd[1548]: time="2025-08-12T23:44:13.696547678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:13.699385 containerd[1548]: time="2025-08-12T23:44:13.699281398Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378431"
Aug 12 23:44:13.701437 containerd[1548]: time="2025-08-12T23:44:13.701335856Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:13.704991 containerd[1548]: time="2025-08-12T23:44:13.704917813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:13.706231 containerd[1548]: time="2025-08-12T23:44:13.705743139Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.391127327s"
Aug 12 23:44:13.706231 containerd[1548]: time="2025-08-12T23:44:13.705802836Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Aug 12 23:44:13.706497 containerd[1548]: time="2025-08-12T23:44:13.706456028Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 12 23:44:14.265897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698196830.mount: Deactivated successfully.
Aug 12 23:44:15.019469 containerd[1548]: time="2025-08-12T23:44:15.019391306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:15.020695 containerd[1548]: time="2025-08-12T23:44:15.020647806Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Aug 12 23:44:15.022180 containerd[1548]: time="2025-08-12T23:44:15.022112037Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:15.026628 containerd[1548]: time="2025-08-12T23:44:15.026587660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:15.028298 containerd[1548]: time="2025-08-12T23:44:15.028239068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.321741455s"
Aug 12 23:44:15.028475 containerd[1548]: time="2025-08-12T23:44:15.028401853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 12 23:44:15.029060 containerd[1548]: time="2025-08-12T23:44:15.029017368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 12 23:44:15.516593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477735208.mount: Deactivated successfully.
Aug 12 23:44:15.525223 containerd[1548]: time="2025-08-12T23:44:15.525135072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:44:15.527454 containerd[1548]: time="2025-08-12T23:44:15.527398076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Aug 12 23:44:15.528098 containerd[1548]: time="2025-08-12T23:44:15.528015629Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:44:15.531098 containerd[1548]: time="2025-08-12T23:44:15.531037019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:44:15.532238 containerd[1548]: time="2025-08-12T23:44:15.532186954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 503.124281ms"
Aug 12 23:44:15.532331 containerd[1548]: time="2025-08-12T23:44:15.532238337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 12 23:44:15.532752 containerd[1548]: time="2025-08-12T23:44:15.532710659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 12 23:44:16.067767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692900206.mount: Deactivated successfully.
Aug 12 23:44:17.599003 containerd[1548]: time="2025-08-12T23:44:17.598905124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:17.600598 containerd[1548]: time="2025-08-12T23:44:17.600541363Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537"
Aug 12 23:44:17.601629 containerd[1548]: time="2025-08-12T23:44:17.601546548Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:17.605699 containerd[1548]: time="2025-08-12T23:44:17.605645783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:44:17.607380 containerd[1548]: time="2025-08-12T23:44:17.607092678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.074338154s"
Aug 12 23:44:17.607380 containerd[1548]: time="2025-08-12T23:44:17.607131427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 12 23:44:20.280820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 12 23:44:20.282792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:44:20.434379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:44:20.441723 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:44:20.489862 kubelet[2211]: E0812 23:44:20.489799 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:44:20.494153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:44:20.495055 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:44:20.495641 systemd[1]: kubelet.service: Consumed 158ms CPU time, 107M memory peak.
Aug 12 23:44:23.748815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:44:23.749605 systemd[1]: kubelet.service: Consumed 158ms CPU time, 107M memory peak.
Aug 12 23:44:23.753542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:44:23.785537 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)...
Aug 12 23:44:23.785556 systemd[1]: Reloading...
Aug 12 23:44:23.916258 zram_generator::config[2269]: No configuration found.
Aug 12 23:44:23.992995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:44:24.096441 systemd[1]: Reloading finished in 310 ms.
Aug 12 23:44:24.157825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 12 23:44:24.157934 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 12 23:44:24.158248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:44:24.158298 systemd[1]: kubelet.service: Consumed 99ms CPU time, 95.2M memory peak.
Aug 12 23:44:24.160611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:44:24.306262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:44:24.320627 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:44:24.380894 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:44:24.380894 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:44:24.380894 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:44:24.382231 kubelet[2317]: I0812 23:44:24.381330 2317 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:44:24.772723 kubelet[2317]: I0812 23:44:24.772578 2317 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 12 23:44:24.772934 kubelet[2317]: I0812 23:44:24.772920 2317 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:44:24.773371 kubelet[2317]: I0812 23:44:24.773351 2317 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 12 23:44:24.808147 kubelet[2317]: E0812 23:44:24.808099 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.54.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.54.157:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:44:24.811190 kubelet[2317]: I0812 23:44:24.811147 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:44:24.822945 kubelet[2317]: I0812 23:44:24.822859 2317 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 12 23:44:24.826542 kubelet[2317]: I0812 23:44:24.826488 2317 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:44:24.827775 kubelet[2317]: I0812 23:44:24.827671 2317 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:44:24.828050 kubelet[2317]: I0812 23:44:24.827761 2317 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-f-e67fdcf04d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:44:24.828224 kubelet[2317]: I0812 23:44:24.828111 2317 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:44:24.828224 kubelet[2317]: I0812 23:44:24.828128 2317 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:44:24.828454 kubelet[2317]: I0812 23:44:24.828409 2317 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:44:24.832216 kubelet[2317]: I0812 23:44:24.832166 2317 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:44:24.832347 kubelet[2317]: I0812 23:44:24.832322 2317 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:44:24.832417 kubelet[2317]: I0812 23:44:24.832361 2317 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:44:24.832417 kubelet[2317]: I0812 23:44:24.832375 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:44:24.844097 kubelet[2317]: I0812 23:44:24.844063 2317 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 12 23:44:24.846158 kubelet[2317]: I0812 23:44:24.844814 2317 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:44:24.846158 kubelet[2317]: W0812 23:44:24.844945 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:44:24.846158 kubelet[2317]: I0812 23:44:24.845981 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:44:24.846158 kubelet[2317]: I0812 23:44:24.846017 2317 server.go:1287] "Started kubelet"
Aug 12 23:44:24.846382 kubelet[2317]: W0812 23:44:24.846255 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.54.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-f-e67fdcf04d&limit=500&resourceVersion=0": dial tcp 49.13.54.157:6443: connect: connection refused
Aug 12 23:44:24.846382 kubelet[2317]: E0812 23:44:24.846314 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.54.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-f-e67fdcf04d&limit=500&resourceVersion=0\": dial tcp 49.13.54.157:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:44:24.850021 kubelet[2317]: W0812 23:44:24.849978 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.54.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.54.157:6443: connect: connection refused
Aug 12 23:44:24.850159 kubelet[2317]: E0812 23:44:24.850141 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.54.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.54.157:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:44:24.850326 kubelet[2317]: I0812 23:44:24.850292 2317 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:44:24.854000 kubelet[2317]: I0812 23:44:24.853916 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:44:24.854334 kubelet[2317]: I0812 23:44:24.854311 2317 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:44:24.854969 kubelet[2317]: E0812 23:44:24.854681 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.54.157:6443/api/v1/namespaces/default/events\": dial tcp 49.13.54.157:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-f-e67fdcf04d.185b29a429f14924 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-f-e67fdcf04d,UID:ci-4372-1-0-f-e67fdcf04d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-f-e67fdcf04d,},FirstTimestamp:2025-08-12 23:44:24.845994276 +0000 UTC m=+0.518315057,LastTimestamp:2025-08-12 23:44:24.845994276 +0000 UTC m=+0.518315057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-f-e67fdcf04d,}"
Aug 12 23:44:24.855078 kubelet[2317]: I0812 23:44:24.855018 2317 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:44:24.856392 kubelet[2317]: I0812 23:44:24.856326 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:44:24.859550 kubelet[2317]: I0812 23:44:24.859520 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:44:24.862782 kubelet[2317]: E0812 23:44:24.862716 2317 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:44:24.863998 kubelet[2317]: I0812 23:44:24.863963 2317 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:44:24.864087 kubelet[2317]: E0812 23:44:24.864069 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found"
Aug 12 23:44:24.864129 kubelet[2317]: I0812 23:44:24.864116 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:44:24.864171 kubelet[2317]: I0812 23:44:24.864167 2317 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:44:24.864678 kubelet[2317]: W0812 23:44:24.864528 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.54.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.54.157:6443: connect: connection refused
Aug 12 23:44:24.864678 kubelet[2317]: E0812 23:44:24.864576 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.54.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.54.157:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:44:24.864678 kubelet[2317]: E0812 23:44:24.864655 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.54.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-f-e67fdcf04d?timeout=10s\": dial tcp 49.13.54.157:6443: connect: connection refused" interval="200ms"
Aug 12 23:44:24.864968 kubelet[2317]: I0812 23:44:24.864939 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:44:24.866457 kubelet[2317]: I0812
23:44:24.866430 2317 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:44:24.866457 kubelet[2317]: I0812 23:44:24.866450 2317 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:44:24.879072 kubelet[2317]: I0812 23:44:24.879025 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:44:24.880261 kubelet[2317]: I0812 23:44:24.880229 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:44:24.880376 kubelet[2317]: I0812 23:44:24.880363 2317 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:44:24.880468 kubelet[2317]: I0812 23:44:24.880453 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 12 23:44:24.880526 kubelet[2317]: I0812 23:44:24.880515 2317 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:44:24.880650 kubelet[2317]: E0812 23:44:24.880622 2317 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:44:24.895572 kubelet[2317]: W0812 23:44:24.895352 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.54.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.54.157:6443: connect: connection refused Aug 12 23:44:24.895572 kubelet[2317]: E0812 23:44:24.895414 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.54.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.54.157:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:44:24.898657 kubelet[2317]: I0812 23:44:24.898624 2317 cpu_manager.go:221] "Starting CPU 
manager" policy="none" Aug 12 23:44:24.898657 kubelet[2317]: I0812 23:44:24.898646 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:44:24.898657 kubelet[2317]: I0812 23:44:24.898665 2317 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:44:24.901535 kubelet[2317]: I0812 23:44:24.901489 2317 policy_none.go:49] "None policy: Start" Aug 12 23:44:24.901607 kubelet[2317]: I0812 23:44:24.901533 2317 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:44:24.901607 kubelet[2317]: I0812 23:44:24.901564 2317 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:44:24.910614 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:44:24.925237 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:44:24.931787 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:44:24.946228 kubelet[2317]: I0812 23:44:24.946098 2317 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:44:24.946620 kubelet[2317]: I0812 23:44:24.946591 2317 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:44:24.946960 kubelet[2317]: I0812 23:44:24.946781 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:44:24.948035 kubelet[2317]: I0812 23:44:24.947588 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:44:24.951318 kubelet[2317]: E0812 23:44:24.951148 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 12 23:44:24.951454 kubelet[2317]: E0812 23:44:24.951441 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-f-e67fdcf04d\" not found" Aug 12 23:44:24.994977 systemd[1]: Created slice kubepods-burstable-pod0ba9cc9d3f3dc4d27ceea1b7e8d51ba3.slice - libcontainer container kubepods-burstable-pod0ba9cc9d3f3dc4d27ceea1b7e8d51ba3.slice. Aug 12 23:44:25.010487 kubelet[2317]: E0812 23:44:25.010013 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.014068 systemd[1]: Created slice kubepods-burstable-pod69984ed8d88af2f77d5d8b4933456971.slice - libcontainer container kubepods-burstable-pod69984ed8d88af2f77d5d8b4933456971.slice. Aug 12 23:44:25.017882 kubelet[2317]: E0812 23:44:25.017661 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.020678 systemd[1]: Created slice kubepods-burstable-pod5a18637e370445f2fc5007cd65b5be83.slice - libcontainer container kubepods-burstable-pod5a18637e370445f2fc5007cd65b5be83.slice. 
Aug 12 23:44:25.024367 kubelet[2317]: E0812 23:44:25.024270 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.053120 kubelet[2317]: I0812 23:44:25.053027 2317 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.054323 kubelet[2317]: E0812 23:44:25.054179 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.54.157:6443/api/v1/nodes\": dial tcp 49.13.54.157:6443: connect: connection refused" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.065973 kubelet[2317]: I0812 23:44:25.065879 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066149 kubelet[2317]: I0812 23:44:25.065954 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066149 kubelet[2317]: I0812 23:44:25.066028 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066149 kubelet[2317]: 
I0812 23:44:25.066073 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066149 kubelet[2317]: I0812 23:44:25.066124 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066358 kubelet[2317]: I0812 23:44:25.066156 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066358 kubelet[2317]: I0812 23:44:25.066187 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066358 kubelet[2317]: I0812 23:44:25.066255 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-kubeconfig\") pod 
\"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066358 kubelet[2317]: I0812 23:44:25.066295 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69984ed8d88af2f77d5d8b4933456971-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-f-e67fdcf04d\" (UID: \"69984ed8d88af2f77d5d8b4933456971\") " pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.066941 kubelet[2317]: E0812 23:44:25.066864 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.54.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-f-e67fdcf04d?timeout=10s\": dial tcp 49.13.54.157:6443: connect: connection refused" interval="400ms" Aug 12 23:44:25.258158 kubelet[2317]: I0812 23:44:25.258023 2317 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.258887 kubelet[2317]: E0812 23:44:25.258826 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.54.157:6443/api/v1/nodes\": dial tcp 49.13.54.157:6443: connect: connection refused" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.313482 containerd[1548]: time="2025-08-12T23:44:25.313338515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-f-e67fdcf04d,Uid:0ba9cc9d3f3dc4d27ceea1b7e8d51ba3,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:25.319529 containerd[1548]: time="2025-08-12T23:44:25.319093746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-f-e67fdcf04d,Uid:69984ed8d88af2f77d5d8b4933456971,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:25.326892 containerd[1548]: time="2025-08-12T23:44:25.326733766Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-f-e67fdcf04d,Uid:5a18637e370445f2fc5007cd65b5be83,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:25.357731 containerd[1548]: time="2025-08-12T23:44:25.357399029Z" level=info msg="connecting to shim 8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39" address="unix:///run/containerd/s/3bc6a9503806106dd2badf2c4a541d00d97b92b1ef522f8c2949024fa5fa7c46" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:25.358157 containerd[1548]: time="2025-08-12T23:44:25.358125822Z" level=info msg="connecting to shim eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998" address="unix:///run/containerd/s/9cdc186dae95ea666b5885e5fb549b4e54da042d75b0c6b1d13659a2ddea5613" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:25.398681 systemd[1]: Started cri-containerd-8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39.scope - libcontainer container 8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39. Aug 12 23:44:25.404245 containerd[1548]: time="2025-08-12T23:44:25.403134090Z" level=info msg="connecting to shim d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8" address="unix:///run/containerd/s/0a7d2a85a3e933c0a47643e9d304b218ea29218fe71b56cf3ac81e9acc4bd537" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:25.404458 systemd[1]: Started cri-containerd-eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998.scope - libcontainer container eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998. Aug 12 23:44:25.433423 systemd[1]: Started cri-containerd-d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8.scope - libcontainer container d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8. 
Aug 12 23:44:25.468510 kubelet[2317]: E0812 23:44:25.468465 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.54.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-f-e67fdcf04d?timeout=10s\": dial tcp 49.13.54.157:6443: connect: connection refused" interval="800ms" Aug 12 23:44:25.475959 containerd[1548]: time="2025-08-12T23:44:25.475690847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-f-e67fdcf04d,Uid:0ba9cc9d3f3dc4d27ceea1b7e8d51ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39\"" Aug 12 23:44:25.483396 containerd[1548]: time="2025-08-12T23:44:25.483343265Z" level=info msg="CreateContainer within sandbox \"8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:44:25.495636 containerd[1548]: time="2025-08-12T23:44:25.495560923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-f-e67fdcf04d,Uid:69984ed8d88af2f77d5d8b4933456971,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998\"" Aug 12 23:44:25.498312 containerd[1548]: time="2025-08-12T23:44:25.498261610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-f-e67fdcf04d,Uid:5a18637e370445f2fc5007cd65b5be83,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8\"" Aug 12 23:44:25.499302 containerd[1548]: time="2025-08-12T23:44:25.499156493Z" level=info msg="Container c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:25.499970 containerd[1548]: time="2025-08-12T23:44:25.499795781Z" level=info msg="CreateContainer within sandbox 
\"eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:44:25.501709 containerd[1548]: time="2025-08-12T23:44:25.501579428Z" level=info msg="CreateContainer within sandbox \"d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:44:25.512376 containerd[1548]: time="2025-08-12T23:44:25.512290950Z" level=info msg="Container 66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:25.518465 containerd[1548]: time="2025-08-12T23:44:25.518413436Z" level=info msg="CreateContainer within sandbox \"8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\"" Aug 12 23:44:25.518797 containerd[1548]: time="2025-08-12T23:44:25.518767814Z" level=info msg="Container 3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:25.519729 containerd[1548]: time="2025-08-12T23:44:25.519597109Z" level=info msg="StartContainer for \"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\"" Aug 12 23:44:25.521867 containerd[1548]: time="2025-08-12T23:44:25.521836396Z" level=info msg="connecting to shim c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae" address="unix:///run/containerd/s/3bc6a9503806106dd2badf2c4a541d00d97b92b1ef522f8c2949024fa5fa7c46" protocol=ttrpc version=3 Aug 12 23:44:25.523912 containerd[1548]: time="2025-08-12T23:44:25.523815689Z" level=info msg="CreateContainer within sandbox \"eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\"" Aug 12 23:44:25.525367 containerd[1548]: time="2025-08-12T23:44:25.525334463Z" level=info msg="StartContainer for \"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\"" Aug 12 23:44:25.528804 containerd[1548]: time="2025-08-12T23:44:25.528726628Z" level=info msg="connecting to shim 66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929" address="unix:///run/containerd/s/9cdc186dae95ea666b5885e5fb549b4e54da042d75b0c6b1d13659a2ddea5613" protocol=ttrpc version=3 Aug 12 23:44:25.529459 containerd[1548]: time="2025-08-12T23:44:25.529410628Z" level=info msg="CreateContainer within sandbox \"d2e0fb4e84c84c146d7752acdd59fdb881d76c38af684a18a16000ffc2dfe7d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510\"" Aug 12 23:44:25.530791 containerd[1548]: time="2025-08-12T23:44:25.530728397Z" level=info msg="StartContainer for \"3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510\"" Aug 12 23:44:25.532691 containerd[1548]: time="2025-08-12T23:44:25.532641181Z" level=info msg="connecting to shim 3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510" address="unix:///run/containerd/s/0a7d2a85a3e933c0a47643e9d304b218ea29218fe71b56cf3ac81e9acc4bd537" protocol=ttrpc version=3 Aug 12 23:44:25.549457 systemd[1]: Started cri-containerd-c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae.scope - libcontainer container c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae. Aug 12 23:44:25.559403 systemd[1]: Started cri-containerd-3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510.scope - libcontainer container 3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510. 
Aug 12 23:44:25.571379 systemd[1]: Started cri-containerd-66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929.scope - libcontainer container 66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929. Aug 12 23:44:25.648797 containerd[1548]: time="2025-08-12T23:44:25.648756581Z" level=info msg="StartContainer for \"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\" returns successfully" Aug 12 23:44:25.650264 containerd[1548]: time="2025-08-12T23:44:25.650232482Z" level=info msg="StartContainer for \"3ed0d7b4b2a6df11058c5703c7c04f5443e69dcd250d3868db8a029ea58c1510\" returns successfully" Aug 12 23:44:25.664857 kubelet[2317]: I0812 23:44:25.664808 2317 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.666360 kubelet[2317]: E0812 23:44:25.666322 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.54.157:6443/api/v1/nodes\": dial tcp 49.13.54.157:6443: connect: connection refused" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.672761 containerd[1548]: time="2025-08-12T23:44:25.672699783Z" level=info msg="StartContainer for \"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\" returns successfully" Aug 12 23:44:25.905087 kubelet[2317]: E0812 23:44:25.904668 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.912209 kubelet[2317]: E0812 23:44:25.909806 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:25.913419 kubelet[2317]: E0812 23:44:25.913402 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" 
node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:26.469164 kubelet[2317]: I0812 23:44:26.468868 2317 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:26.913915 kubelet[2317]: E0812 23:44:26.913889 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:26.915442 kubelet[2317]: E0812 23:44:26.914289 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:26.915881 kubelet[2317]: E0812 23:44:26.915864 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.117435 kubelet[2317]: E0812 23:44:28.117390 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-1-0-f-e67fdcf04d\" not found" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.169552 kubelet[2317]: I0812 23:44:28.169125 2317 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.264624 kubelet[2317]: I0812 23:44:28.264381 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.277925 kubelet[2317]: E0812 23:44:28.277888 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-f-e67fdcf04d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.278297 kubelet[2317]: I0812 23:44:28.278237 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.283570 
kubelet[2317]: E0812 23:44:28.282352 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.283851 kubelet[2317]: I0812 23:44:28.283710 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.285877 kubelet[2317]: E0812 23:44:28.285847 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:28.851689 kubelet[2317]: I0812 23:44:28.851575 2317 apiserver.go:52] "Watching apiserver" Aug 12 23:44:28.865023 kubelet[2317]: I0812 23:44:28.864951 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:44:30.777840 systemd[1]: Reload requested from client PID 2589 ('systemctl') (unit session-7.scope)... Aug 12 23:44:30.777863 systemd[1]: Reloading... Aug 12 23:44:30.899226 zram_generator::config[2633]: No configuration found. Aug 12 23:44:30.982953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:44:31.101873 systemd[1]: Reloading finished in 323 ms. Aug 12 23:44:31.134618 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:44:31.151257 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:44:31.153418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:44:31.153511 systemd[1]: kubelet.service: Consumed 970ms CPU time, 128.2M memory peak. 
Aug 12 23:44:31.156620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:44:31.315712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:44:31.327968 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:44:31.384133 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:44:31.384133 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:44:31.384133 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:44:31.385313 kubelet[2678]: I0812 23:44:31.385151 2678 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:44:31.396953 kubelet[2678]: I0812 23:44:31.396916 2678 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:44:31.397234 kubelet[2678]: I0812 23:44:31.397099 2678 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:44:31.397626 kubelet[2678]: I0812 23:44:31.397605 2678 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:44:31.400157 kubelet[2678]: I0812 23:44:31.399840 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 12 23:44:31.402876 kubelet[2678]: I0812 23:44:31.402850 2678 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:44:31.411766 kubelet[2678]: I0812 23:44:31.411682 2678 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:44:31.417619 kubelet[2678]: I0812 23:44:31.417579 2678 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:44:31.417893 kubelet[2678]: I0812 23:44:31.417854 2678 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:44:31.418060 kubelet[2678]: I0812 23:44:31.417890 2678 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-f-e67fdcf04d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:44:31.418186 kubelet[2678]: I0812 23:44:31.418066 2678 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:44:31.418186 kubelet[2678]: I0812 23:44:31.418074 2678 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:44:31.418186 kubelet[2678]: I0812 23:44:31.418116 2678 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:44:31.418310 kubelet[2678]: I0812 23:44:31.418268 2678 kubelet.go:446] "Attempting to sync node with API server" Aug 12 23:44:31.418310 kubelet[2678]: I0812 23:44:31.418279 2678 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:44:31.418310 kubelet[2678]: I0812 23:44:31.418300 2678 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:44:31.418310 kubelet[2678]: I0812 23:44:31.418309 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:44:31.423029 kubelet[2678]: I0812 23:44:31.422856 2678 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 12 23:44:31.424273 kubelet[2678]: I0812 23:44:31.424194 2678 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:44:31.425275 kubelet[2678]: I0812 23:44:31.425228 2678 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:44:31.425805 kubelet[2678]: I0812 23:44:31.425571 2678 server.go:1287] "Started kubelet" Aug 12 23:44:31.428781 kubelet[2678]: I0812 23:44:31.428738 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:44:31.435516 kubelet[2678]: I0812 
23:44:31.435484 2678 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:44:31.437165 kubelet[2678]: I0812 23:44:31.436544 2678 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:44:31.437606 kubelet[2678]: I0812 23:44:31.437565 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:44:31.437875 kubelet[2678]: I0812 23:44:31.437860 2678 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:44:31.438239 kubelet[2678]: I0812 23:44:31.438222 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:44:31.439980 kubelet[2678]: I0812 23:44:31.439965 2678 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 12 23:44:31.440315 kubelet[2678]: E0812 23:44:31.440294 2678 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-f-e67fdcf04d\" not found" Aug 12 23:44:31.442094 kubelet[2678]: I0812 23:44:31.442075 2678 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:44:31.442314 kubelet[2678]: I0812 23:44:31.442301 2678 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:44:31.444087 kubelet[2678]: I0812 23:44:31.444051 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:44:31.445093 kubelet[2678]: I0812 23:44:31.445074 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:44:31.445183 kubelet[2678]: I0812 23:44:31.445173 2678 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:44:31.445271 kubelet[2678]: I0812 23:44:31.445262 2678 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 12 23:44:31.445318 kubelet[2678]: I0812 23:44:31.445311 2678 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:44:31.445405 kubelet[2678]: E0812 23:44:31.445389 2678 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:44:31.456932 kubelet[2678]: I0812 23:44:31.456896 2678 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:44:31.457131 kubelet[2678]: I0812 23:44:31.457113 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:44:31.459822 kubelet[2678]: I0812 23:44:31.459804 2678 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:44:31.529631 kubelet[2678]: I0812 23:44:31.529603 2678 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:44:31.529631 kubelet[2678]: I0812 23:44:31.529629 2678 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:44:31.529804 kubelet[2678]: I0812 23:44:31.529657 2678 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:44:31.529933 kubelet[2678]: I0812 23:44:31.529893 2678 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:44:31.529966 kubelet[2678]: I0812 23:44:31.529919 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:44:31.529966 kubelet[2678]: I0812 23:44:31.529953 2678 policy_none.go:49] "None policy: Start" Aug 12 23:44:31.530020 kubelet[2678]: I0812 23:44:31.529966 2678 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:44:31.530020 kubelet[2678]: I0812 23:44:31.529980 2678 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:44:31.530148 kubelet[2678]: I0812 23:44:31.530125 2678 state_mem.go:75] "Updated machine memory state" Aug 12 23:44:31.536282 kubelet[2678]: 
I0812 23:44:31.536231 2678 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:44:31.537118 kubelet[2678]: I0812 23:44:31.537098 2678 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:44:31.537252 kubelet[2678]: I0812 23:44:31.537217 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:44:31.537609 kubelet[2678]: I0812 23:44:31.537570 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:44:31.538824 kubelet[2678]: E0812 23:44:31.538787 2678 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:44:31.546106 kubelet[2678]: I0812 23:44:31.545957 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.546494 kubelet[2678]: I0812 23:44:31.546430 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.546689 kubelet[2678]: I0812 23:44:31.546662 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.641648 kubelet[2678]: I0812 23:44:31.641504 2678 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.657912 kubelet[2678]: I0812 23:44:31.657805 2678 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.658145 kubelet[2678]: I0812 23:44:31.657946 2678 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743559 kubelet[2678]: I0812 23:44:31.743503 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743559 kubelet[2678]: I0812 23:44:31.743553 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743741 kubelet[2678]: I0812 23:44:31.743577 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69984ed8d88af2f77d5d8b4933456971-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-f-e67fdcf04d\" (UID: \"69984ed8d88af2f77d5d8b4933456971\") " pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743741 kubelet[2678]: I0812 23:44:31.743594 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743741 kubelet[2678]: I0812 23:44:31.743611 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743741 kubelet[2678]: I0812 23:44:31.743628 
2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743741 kubelet[2678]: I0812 23:44:31.743644 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743885 kubelet[2678]: I0812 23:44:31.743660 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a18637e370445f2fc5007cd65b5be83-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" (UID: \"5a18637e370445f2fc5007cd65b5be83\") " pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:31.743885 kubelet[2678]: I0812 23:44:31.743689 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba9cc9d3f3dc4d27ceea1b7e8d51ba3-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-f-e67fdcf04d\" (UID: \"0ba9cc9d3f3dc4d27ceea1b7e8d51ba3\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:32.245662 update_engine[1521]: I20250812 23:44:32.245568 1521 update_attempter.cc:509] Updating boot flags... 
Aug 12 23:44:32.420120 kubelet[2678]: I0812 23:44:32.419948 2678 apiserver.go:52] "Watching apiserver" Aug 12 23:44:32.445300 kubelet[2678]: I0812 23:44:32.444729 2678 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:44:32.514162 kubelet[2678]: I0812 23:44:32.513660 2678 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:32.530366 kubelet[2678]: E0812 23:44:32.530323 2678 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-f-e67fdcf04d\" already exists" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" Aug 12 23:44:32.583710 kubelet[2678]: I0812 23:44:32.582880 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-f-e67fdcf04d" podStartSLOduration=1.582829126 podStartE2EDuration="1.582829126s" podCreationTimestamp="2025-08-12 23:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:44:32.582405294 +0000 UTC m=+1.245641844" watchObservedRunningTime="2025-08-12 23:44:32.582829126 +0000 UTC m=+1.246065636" Aug 12 23:44:32.583937 kubelet[2678]: I0812 23:44:32.583820 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-f-e67fdcf04d" podStartSLOduration=1.5838060569999999 podStartE2EDuration="1.583806057s" podCreationTimestamp="2025-08-12 23:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:44:32.567911151 +0000 UTC m=+1.231147661" watchObservedRunningTime="2025-08-12 23:44:32.583806057 +0000 UTC m=+1.247042567" Aug 12 23:44:32.597732 kubelet[2678]: I0812 23:44:32.597598 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4372-1-0-f-e67fdcf04d" podStartSLOduration=1.597539205 podStartE2EDuration="1.597539205s" podCreationTimestamp="2025-08-12 23:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:44:32.597448935 +0000 UTC m=+1.260685445" watchObservedRunningTime="2025-08-12 23:44:32.597539205 +0000 UTC m=+1.260775755" Aug 12 23:44:36.381260 kubelet[2678]: I0812 23:44:36.381144 2678 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:44:36.382021 containerd[1548]: time="2025-08-12T23:44:36.381930843Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:44:36.383034 kubelet[2678]: I0812 23:44:36.382331 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:44:37.050352 systemd[1]: Created slice kubepods-besteffort-pod6d6aa8f8_f3a2_405d_853d_a6b3172f5965.slice - libcontainer container kubepods-besteffort-pod6d6aa8f8_f3a2_405d_853d_a6b3172f5965.slice. 
Aug 12 23:44:37.079404 kubelet[2678]: I0812 23:44:37.079243 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d6aa8f8-f3a2-405d-853d-a6b3172f5965-kube-proxy\") pod \"kube-proxy-kw4ph\" (UID: \"6d6aa8f8-f3a2-405d-853d-a6b3172f5965\") " pod="kube-system/kube-proxy-kw4ph" Aug 12 23:44:37.079404 kubelet[2678]: I0812 23:44:37.079293 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d6aa8f8-f3a2-405d-853d-a6b3172f5965-xtables-lock\") pod \"kube-proxy-kw4ph\" (UID: \"6d6aa8f8-f3a2-405d-853d-a6b3172f5965\") " pod="kube-system/kube-proxy-kw4ph" Aug 12 23:44:37.079404 kubelet[2678]: I0812 23:44:37.079313 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d6aa8f8-f3a2-405d-853d-a6b3172f5965-lib-modules\") pod \"kube-proxy-kw4ph\" (UID: \"6d6aa8f8-f3a2-405d-853d-a6b3172f5965\") " pod="kube-system/kube-proxy-kw4ph" Aug 12 23:44:37.079404 kubelet[2678]: I0812 23:44:37.079333 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74sv\" (UniqueName: \"kubernetes.io/projected/6d6aa8f8-f3a2-405d-853d-a6b3172f5965-kube-api-access-k74sv\") pod \"kube-proxy-kw4ph\" (UID: \"6d6aa8f8-f3a2-405d-853d-a6b3172f5965\") " pod="kube-system/kube-proxy-kw4ph" Aug 12 23:44:37.359883 containerd[1548]: time="2025-08-12T23:44:37.359425308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kw4ph,Uid:6d6aa8f8-f3a2-405d-853d-a6b3172f5965,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:37.390222 containerd[1548]: time="2025-08-12T23:44:37.389319972Z" level=info msg="connecting to shim 43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569" 
address="unix:///run/containerd/s/4bd8514bf10e637bbc888931b403481190466bff87e34864fc1884f899aadc02" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:37.426410 systemd[1]: Started cri-containerd-43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569.scope - libcontainer container 43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569. Aug 12 23:44:37.482041 kubelet[2678]: I0812 23:44:37.482003 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgg92\" (UniqueName: \"kubernetes.io/projected/ba89a48a-ebec-435a-9e3a-7119c4791204-kube-api-access-pgg92\") pod \"tigera-operator-747864d56d-nbdpw\" (UID: \"ba89a48a-ebec-435a-9e3a-7119c4791204\") " pod="tigera-operator/tigera-operator-747864d56d-nbdpw" Aug 12 23:44:37.482041 kubelet[2678]: I0812 23:44:37.482044 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba89a48a-ebec-435a-9e3a-7119c4791204-var-lib-calico\") pod \"tigera-operator-747864d56d-nbdpw\" (UID: \"ba89a48a-ebec-435a-9e3a-7119c4791204\") " pod="tigera-operator/tigera-operator-747864d56d-nbdpw" Aug 12 23:44:37.485322 systemd[1]: Created slice kubepods-besteffort-podba89a48a_ebec_435a_9e3a_7119c4791204.slice - libcontainer container kubepods-besteffort-podba89a48a_ebec_435a_9e3a_7119c4791204.slice. 
Aug 12 23:44:37.517459 containerd[1548]: time="2025-08-12T23:44:37.517386782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kw4ph,Uid:6d6aa8f8-f3a2-405d-853d-a6b3172f5965,Namespace:kube-system,Attempt:0,} returns sandbox id \"43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569\"" Aug 12 23:44:37.523271 containerd[1548]: time="2025-08-12T23:44:37.522791665Z" level=info msg="CreateContainer within sandbox \"43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:44:37.540431 containerd[1548]: time="2025-08-12T23:44:37.538924001Z" level=info msg="Container c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:37.551506 containerd[1548]: time="2025-08-12T23:44:37.551433950Z" level=info msg="CreateContainer within sandbox \"43ee16c0f89b4a5381771e7101153302a4645acfee5a3a87454f1e4b7837b569\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a\"" Aug 12 23:44:37.553528 containerd[1548]: time="2025-08-12T23:44:37.553483024Z" level=info msg="StartContainer for \"c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a\"" Aug 12 23:44:37.555714 containerd[1548]: time="2025-08-12T23:44:37.555643770Z" level=info msg="connecting to shim c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a" address="unix:///run/containerd/s/4bd8514bf10e637bbc888931b403481190466bff87e34864fc1884f899aadc02" protocol=ttrpc version=3 Aug 12 23:44:37.578470 systemd[1]: Started cri-containerd-c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a.scope - libcontainer container c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a. 
Aug 12 23:44:37.650755 containerd[1548]: time="2025-08-12T23:44:37.650147812Z" level=info msg="StartContainer for \"c4330dc9d09ef73661c559fc320ce0454533c0c05ae7477acd377aa13bbfa48a\" returns successfully" Aug 12 23:44:37.791247 containerd[1548]: time="2025-08-12T23:44:37.791186534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nbdpw,Uid:ba89a48a-ebec-435a-9e3a-7119c4791204,Namespace:tigera-operator,Attempt:0,}" Aug 12 23:44:37.818990 containerd[1548]: time="2025-08-12T23:44:37.818891855Z" level=info msg="connecting to shim 09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f" address="unix:///run/containerd/s/7e27c316021208d8cb8ea831952dcf825c21a7b24201439e890a4daa97a79bdd" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:37.853081 systemd[1]: Started cri-containerd-09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f.scope - libcontainer container 09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f. Aug 12 23:44:37.902984 containerd[1548]: time="2025-08-12T23:44:37.902750597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nbdpw,Uid:ba89a48a-ebec-435a-9e3a-7119c4791204,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f\"" Aug 12 23:44:37.906731 containerd[1548]: time="2025-08-12T23:44:37.906379824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 12 23:44:39.008227 kubelet[2678]: I0812 23:44:39.008090 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kw4ph" podStartSLOduration=2.008037349 podStartE2EDuration="2.008037349s" podCreationTimestamp="2025-08-12 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:44:38.551227567 +0000 UTC m=+7.214464077" watchObservedRunningTime="2025-08-12 
23:44:39.008037349 +0000 UTC m=+7.671273859" Aug 12 23:44:39.391790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747038477.mount: Deactivated successfully. Aug 12 23:44:39.861409 containerd[1548]: time="2025-08-12T23:44:39.861339978Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:39.862640 containerd[1548]: time="2025-08-12T23:44:39.862358346Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 12 23:44:39.863557 containerd[1548]: time="2025-08-12T23:44:39.863500025Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:39.866900 containerd[1548]: time="2025-08-12T23:44:39.866836508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:39.867873 containerd[1548]: time="2025-08-12T23:44:39.867755202Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.961330222s" Aug 12 23:44:39.867873 containerd[1548]: time="2025-08-12T23:44:39.867790080Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 12 23:44:39.871751 containerd[1548]: time="2025-08-12T23:44:39.871689043Z" level=info msg="CreateContainer within sandbox \"09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 12 23:44:39.889933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670173826.mount: Deactivated successfully. Aug 12 23:44:39.892234 containerd[1548]: time="2025-08-12T23:44:39.890771448Z" level=info msg="Container ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:39.906527 containerd[1548]: time="2025-08-12T23:44:39.906467533Z" level=info msg="CreateContainer within sandbox \"09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\"" Aug 12 23:44:39.907119 containerd[1548]: time="2025-08-12T23:44:39.907097648Z" level=info msg="StartContainer for \"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\"" Aug 12 23:44:39.908500 containerd[1548]: time="2025-08-12T23:44:39.908433513Z" level=info msg="connecting to shim ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439" address="unix:///run/containerd/s/7e27c316021208d8cb8ea831952dcf825c21a7b24201439e890a4daa97a79bdd" protocol=ttrpc version=3 Aug 12 23:44:39.929517 systemd[1]: Started cri-containerd-ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439.scope - libcontainer container ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439. 
Aug 12 23:44:39.972440 containerd[1548]: time="2025-08-12T23:44:39.972406289Z" level=info msg="StartContainer for \"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\" returns successfully" Aug 12 23:44:40.560480 kubelet[2678]: I0812 23:44:40.560394 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-nbdpw" podStartSLOduration=1.596153326 podStartE2EDuration="3.560371928s" podCreationTimestamp="2025-08-12 23:44:37 +0000 UTC" firstStartedPulling="2025-08-12 23:44:37.904958379 +0000 UTC m=+6.568194889" lastFinishedPulling="2025-08-12 23:44:39.869177021 +0000 UTC m=+8.532413491" observedRunningTime="2025-08-12 23:44:40.559653776 +0000 UTC m=+9.222890326" watchObservedRunningTime="2025-08-12 23:44:40.560371928 +0000 UTC m=+9.223608438" Aug 12 23:44:46.113305 sudo[1769]: pam_unix(sudo:session): session closed for user root Aug 12 23:44:46.277273 sshd[1768]: Connection closed by 139.178.68.195 port 60472 Aug 12 23:44:46.277850 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Aug 12 23:44:46.283190 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:44:46.283381 systemd[1]: sshd@6-49.13.54.157:22-139.178.68.195:60472.service: Deactivated successfully. Aug 12 23:44:46.286804 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:44:46.289643 systemd[1]: session-7.scope: Consumed 7.700s CPU time, 228.3M memory peak. Aug 12 23:44:46.295702 systemd-logind[1520]: Removed session 7. Aug 12 23:44:51.528663 systemd[1]: Created slice kubepods-besteffort-pod543c9ed2_d30a_42b8_9309_d2fc78193f45.slice - libcontainer container kubepods-besteffort-pod543c9ed2_d30a_42b8_9309_d2fc78193f45.slice. 
Aug 12 23:44:51.581471 kubelet[2678]: I0812 23:44:51.580833 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/543c9ed2-d30a-42b8-9309-d2fc78193f45-tigera-ca-bundle\") pod \"calico-typha-794785f7bf-gq7vk\" (UID: \"543c9ed2-d30a-42b8-9309-d2fc78193f45\") " pod="calico-system/calico-typha-794785f7bf-gq7vk" Aug 12 23:44:51.581471 kubelet[2678]: I0812 23:44:51.580881 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/543c9ed2-d30a-42b8-9309-d2fc78193f45-typha-certs\") pod \"calico-typha-794785f7bf-gq7vk\" (UID: \"543c9ed2-d30a-42b8-9309-d2fc78193f45\") " pod="calico-system/calico-typha-794785f7bf-gq7vk" Aug 12 23:44:51.581471 kubelet[2678]: I0812 23:44:51.580902 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx5k\" (UniqueName: \"kubernetes.io/projected/543c9ed2-d30a-42b8-9309-d2fc78193f45-kube-api-access-2vx5k\") pod \"calico-typha-794785f7bf-gq7vk\" (UID: \"543c9ed2-d30a-42b8-9309-d2fc78193f45\") " pod="calico-system/calico-typha-794785f7bf-gq7vk" Aug 12 23:44:51.736410 systemd[1]: Created slice kubepods-besteffort-pod1e46c828_9b04_4320_9719_d5e7472bdd78.slice - libcontainer container kubepods-besteffort-pod1e46c828_9b04_4320_9719_d5e7472bdd78.slice. 
Aug 12 23:44:51.783034 kubelet[2678]: I0812 23:44:51.782742 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-cni-bin-dir\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784389 kubelet[2678]: I0812 23:44:51.784315 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-cni-log-dir\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784389 kubelet[2678]: I0812 23:44:51.784356 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-flexvol-driver-host\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784389 kubelet[2678]: I0812 23:44:51.784375 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e46c828-9b04-4320-9719-d5e7472bdd78-tigera-ca-bundle\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784389 kubelet[2678]: I0812 23:44:51.784393 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-xtables-lock\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784568 kubelet[2678]: I0812 23:44:51.784414 2678 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-cni-net-dir\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784568 kubelet[2678]: I0812 23:44:51.784429 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-lib-modules\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784568 kubelet[2678]: I0812 23:44:51.784444 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-policysync\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784568 kubelet[2678]: I0812 23:44:51.784463 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e46c828-9b04-4320-9719-d5e7472bdd78-node-certs\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784568 kubelet[2678]: I0812 23:44:51.784478 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-var-run-calico\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784667 kubelet[2678]: I0812 23:44:51.784496 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e46c828-9b04-4320-9719-d5e7472bdd78-var-lib-calico\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.784667 kubelet[2678]: I0812 23:44:51.784514 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjj8j\" (UniqueName: \"kubernetes.io/projected/1e46c828-9b04-4320-9719-d5e7472bdd78-kube-api-access-kjj8j\") pod \"calico-node-n7gvs\" (UID: \"1e46c828-9b04-4320-9719-d5e7472bdd78\") " pod="calico-system/calico-node-n7gvs" Aug 12 23:44:51.835835 containerd[1548]: time="2025-08-12T23:44:51.835092363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794785f7bf-gq7vk,Uid:543c9ed2-d30a-42b8-9309-d2fc78193f45,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:51.844059 kubelet[2678]: E0812 23:44:51.844006 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9cgr" podUID="8e2e417a-afc9-4f75-a471-5551fad879ea" Aug 12 23:44:51.876225 containerd[1548]: time="2025-08-12T23:44:51.874368197Z" level=info msg="connecting to shim 61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8" address="unix:///run/containerd/s/a99c20d5c118cb97bf93125aa626d246b494143831c20db25a8a15caf55cca5d" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:51.886614 kubelet[2678]: I0812 23:44:51.886577 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e2e417a-afc9-4f75-a471-5551fad879ea-kubelet-dir\") pod \"csi-node-driver-m9cgr\" (UID: \"8e2e417a-afc9-4f75-a471-5551fad879ea\") " pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:51.886731 
kubelet[2678]: I0812 23:44:51.886628 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e2e417a-afc9-4f75-a471-5551fad879ea-varrun\") pod \"csi-node-driver-m9cgr\" (UID: \"8e2e417a-afc9-4f75-a471-5551fad879ea\") " pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:51.886731 kubelet[2678]: I0812 23:44:51.886647 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb4s\" (UniqueName: \"kubernetes.io/projected/8e2e417a-afc9-4f75-a471-5551fad879ea-kube-api-access-sgb4s\") pod \"csi-node-driver-m9cgr\" (UID: \"8e2e417a-afc9-4f75-a471-5551fad879ea\") " pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:51.886731 kubelet[2678]: I0812 23:44:51.886684 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e2e417a-afc9-4f75-a471-5551fad879ea-registration-dir\") pod \"csi-node-driver-m9cgr\" (UID: \"8e2e417a-afc9-4f75-a471-5551fad879ea\") " pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:51.886812 kubelet[2678]: I0812 23:44:51.886750 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e2e417a-afc9-4f75-a471-5551fad879ea-socket-dir\") pod \"csi-node-driver-m9cgr\" (UID: \"8e2e417a-afc9-4f75-a471-5551fad879ea\") " pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:51.909456 kubelet[2678]: E0812 23:44:51.909421 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.909456 kubelet[2678]: W0812 23:44:51.909447 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Aug 12 23:44:51.909591 kubelet[2678]: E0812 23:44:51.909477 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.922419 systemd[1]: Started cri-containerd-61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8.scope - libcontainer container 61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8. Aug 12 23:44:51.925584 kubelet[2678]: E0812 23:44:51.925114 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.925584 kubelet[2678]: W0812 23:44:51.925546 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.925584 kubelet[2678]: E0812 23:44:51.925568 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.988109 kubelet[2678]: E0812 23:44:51.988054 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.988568 kubelet[2678]: W0812 23:44:51.988303 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.988568 kubelet[2678]: E0812 23:44:51.988334 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.988949 kubelet[2678]: E0812 23:44:51.988920 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.988949 kubelet[2678]: W0812 23:44:51.988939 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.989096 kubelet[2678]: E0812 23:44:51.988961 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.989559 kubelet[2678]: E0812 23:44:51.989536 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.989559 kubelet[2678]: W0812 23:44:51.989554 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.989752 kubelet[2678]: E0812 23:44:51.989576 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.989890 kubelet[2678]: E0812 23:44:51.989865 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.989890 kubelet[2678]: W0812 23:44:51.989881 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.990331 kubelet[2678]: E0812 23:44:51.989937 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.990523 kubelet[2678]: E0812 23:44:51.990503 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.990750 kubelet[2678]: W0812 23:44:51.990524 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.990750 kubelet[2678]: E0812 23:44:51.990544 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.991313 kubelet[2678]: E0812 23:44:51.991172 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.991313 kubelet[2678]: W0812 23:44:51.991304 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.991458 kubelet[2678]: E0812 23:44:51.991331 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.991795 kubelet[2678]: E0812 23:44:51.991722 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.991795 kubelet[2678]: W0812 23:44:51.991774 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.992038 kubelet[2678]: E0812 23:44:51.991871 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.992569 kubelet[2678]: E0812 23:44:51.992542 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.992569 kubelet[2678]: W0812 23:44:51.992559 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.992778 kubelet[2678]: E0812 23:44:51.992627 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.993014 kubelet[2678]: E0812 23:44:51.992990 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.993014 kubelet[2678]: W0812 23:44:51.993014 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.993183 kubelet[2678]: E0812 23:44:51.993079 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.993528 kubelet[2678]: E0812 23:44:51.993421 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.993528 kubelet[2678]: W0812 23:44:51.993439 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.993528 kubelet[2678]: E0812 23:44:51.993472 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.993772 kubelet[2678]: E0812 23:44:51.993750 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.993772 kubelet[2678]: W0812 23:44:51.993765 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.993772 kubelet[2678]: E0812 23:44:51.993796 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.994307 kubelet[2678]: E0812 23:44:51.994226 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.994307 kubelet[2678]: W0812 23:44:51.994281 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.994767 kubelet[2678]: E0812 23:44:51.994599 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.994928 kubelet[2678]: E0812 23:44:51.994906 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.994928 kubelet[2678]: W0812 23:44:51.994923 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.995606 kubelet[2678]: E0812 23:44:51.995585 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.995830 kubelet[2678]: E0812 23:44:51.995807 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.995830 kubelet[2678]: W0812 23:44:51.995824 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.996034 kubelet[2678]: E0812 23:44:51.996011 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.996366 kubelet[2678]: E0812 23:44:51.996344 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.996366 kubelet[2678]: W0812 23:44:51.996360 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.996504 kubelet[2678]: E0812 23:44:51.996474 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.996816 kubelet[2678]: E0812 23:44:51.996795 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.996816 kubelet[2678]: W0812 23:44:51.996810 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.996952 kubelet[2678]: E0812 23:44:51.996866 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.997157 kubelet[2678]: E0812 23:44:51.997140 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.997157 kubelet[2678]: W0812 23:44:51.997154 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.997335 kubelet[2678]: E0812 23:44:51.997232 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.997647 kubelet[2678]: E0812 23:44:51.997574 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.997647 kubelet[2678]: W0812 23:44:51.997593 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.997647 kubelet[2678]: E0812 23:44:51.997622 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.997996 kubelet[2678]: E0812 23:44:51.997965 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.998170 kubelet[2678]: W0812 23:44:51.998085 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.998484 kubelet[2678]: E0812 23:44:51.998449 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.998484 kubelet[2678]: W0812 23:44:51.998471 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.998672 kubelet[2678]: E0812 23:44:51.998446 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.998672 kubelet[2678]: E0812 23:44:51.998586 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.998887 kubelet[2678]: E0812 23:44:51.998861 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.998887 kubelet[2678]: W0812 23:44:51.998883 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.999267 kubelet[2678]: E0812 23:44:51.998961 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:51.999267 kubelet[2678]: E0812 23:44:51.999156 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.999267 kubelet[2678]: W0812 23:44:51.999167 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.999490 kubelet[2678]: E0812 23:44:51.999382 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:51.999533 kubelet[2678]: E0812 23:44:51.999513 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:51.999533 kubelet[2678]: W0812 23:44:51.999525 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:51.999577 kubelet[2678]: E0812 23:44:51.999537 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:52.000084 kubelet[2678]: E0812 23:44:52.000061 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:52.000379 kubelet[2678]: W0812 23:44:52.000234 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:52.000611 kubelet[2678]: E0812 23:44:52.000472 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:52.000861 kubelet[2678]: E0812 23:44:52.000810 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:52.000861 kubelet[2678]: W0812 23:44:52.000825 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:52.000861 kubelet[2678]: E0812 23:44:52.000838 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:52.014428 kubelet[2678]: E0812 23:44:52.014346 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:52.014428 kubelet[2678]: W0812 23:44:52.014371 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:52.014428 kubelet[2678]: E0812 23:44:52.014391 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:52.045305 containerd[1548]: time="2025-08-12T23:44:52.044247645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794785f7bf-gq7vk,Uid:543c9ed2-d30a-42b8-9309-d2fc78193f45,Namespace:calico-system,Attempt:0,} returns sandbox id \"61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8\"" Aug 12 23:44:52.046773 containerd[1548]: time="2025-08-12T23:44:52.046731169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7gvs,Uid:1e46c828-9b04-4320-9719-d5e7472bdd78,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:52.051461 containerd[1548]: time="2025-08-12T23:44:52.051394626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 12 23:44:52.083383 containerd[1548]: time="2025-08-12T23:44:52.083333246Z" level=info msg="connecting to shim e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0" address="unix:///run/containerd/s/abdd36aacfc6f102caf88bf8dae746e0bbb5690739d346d1f55024df40a47567" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:44:52.122443 systemd[1]: Started cri-containerd-e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0.scope - libcontainer container e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0. Aug 12 23:44:52.281743 containerd[1548]: time="2025-08-12T23:44:52.281675518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n7gvs,Uid:1e46c828-9b04-4320-9719-d5e7472bdd78,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\"" Aug 12 23:44:53.389094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555326611.mount: Deactivated successfully. 
Aug 12 23:44:53.448944 kubelet[2678]: E0812 23:44:53.448025 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9cgr" podUID="8e2e417a-afc9-4f75-a471-5551fad879ea" Aug 12 23:44:53.975687 containerd[1548]: time="2025-08-12T23:44:53.975617715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:53.977757 containerd[1548]: time="2025-08-12T23:44:53.977721814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 12 23:44:53.978637 containerd[1548]: time="2025-08-12T23:44:53.978587549Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:53.982469 containerd[1548]: time="2025-08-12T23:44:53.982086129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:53.983773 containerd[1548]: time="2025-08-12T23:44:53.983744881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.931898829s" Aug 12 23:44:53.984035 containerd[1548]: time="2025-08-12T23:44:53.983907636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 12 23:44:53.986783 containerd[1548]: time="2025-08-12T23:44:53.985892499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:44:54.006868 containerd[1548]: time="2025-08-12T23:44:54.006830909Z" level=info msg="CreateContainer within sandbox \"61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 12 23:44:54.021522 containerd[1548]: time="2025-08-12T23:44:54.021474633Z" level=info msg="Container 1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:54.034461 containerd[1548]: time="2025-08-12T23:44:54.034220090Z" level=info msg="CreateContainer within sandbox \"61b6bed51ab7dfd6f9ffc8958dbcee58dd5a77abcaa83dc74ed914f7dc3fada8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187\"" Aug 12 23:44:54.035990 containerd[1548]: time="2025-08-12T23:44:54.035931203Z" level=info msg="StartContainer for \"1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187\"" Aug 12 23:44:54.038605 containerd[1548]: time="2025-08-12T23:44:54.038567372Z" level=info msg="connecting to shim 1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187" address="unix:///run/containerd/s/a99c20d5c118cb97bf93125aa626d246b494143831c20db25a8a15caf55cca5d" protocol=ttrpc version=3 Aug 12 23:44:54.069619 systemd[1]: Started cri-containerd-1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187.scope - libcontainer container 1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187. 
Aug 12 23:44:54.124844 containerd[1548]: time="2025-08-12T23:44:54.124768727Z" level=info msg="StartContainer for \"1c72f72dde196eb4096205fc194906a1baf16ac0c5f709afe0632796d819a187\" returns successfully" Aug 12 23:44:54.680897 kubelet[2678]: E0812 23:44:54.680640 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:54.680897 kubelet[2678]: W0812 23:44:54.680692 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:54.680897 kubelet[2678]: E0812 23:44:54.680724 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:54.681986 kubelet[2678]: E0812 23:44:54.681926 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:54.682476 kubelet[2678]: W0812 23:44:54.681954 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:54.682476 kubelet[2678]: E0812 23:44:54.682352 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:54.683024 kubelet[2678]: E0812 23:44:54.682900 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:54.683024 kubelet[2678]: W0812 23:44:54.682925 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:54.683024 kubelet[2678]: E0812 23:44:54.682948 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:54.683877 kubelet[2678]: E0812 23:44:54.683682 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:54.683877 kubelet[2678]: W0812 23:44:54.683704 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:54.683877 kubelet[2678]: E0812 23:44:54.683726 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:44:54.718807 kubelet[2678]: E0812 23:44:54.718788 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:44:54.718881 kubelet[2678]: W0812 23:44:54.718868 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:44:54.718997 kubelet[2678]: E0812 23:44:54.718980 2678 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:44:55.231802 containerd[1548]: time="2025-08-12T23:44:55.231751854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:55.233875 containerd[1548]: time="2025-08-12T23:44:55.233828602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 12 23:44:55.236300 containerd[1548]: time="2025-08-12T23:44:55.235682755Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:55.240109 containerd[1548]: time="2025-08-12T23:44:55.239619095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:55.241554 containerd[1548]: time="2025-08-12T23:44:55.241460369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.254539139s" Aug 12 23:44:55.241554 containerd[1548]: time="2025-08-12T23:44:55.241504687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:44:55.245400 containerd[1548]: time="2025-08-12T23:44:55.245352630Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:44:55.261822 containerd[1548]: time="2025-08-12T23:44:55.261730216Z" level=info msg="Container 05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:55.262863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129942241.mount: Deactivated successfully. 
Aug 12 23:44:55.275305 containerd[1548]: time="2025-08-12T23:44:55.275238794Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\"" Aug 12 23:44:55.277292 containerd[1548]: time="2025-08-12T23:44:55.276376245Z" level=info msg="StartContainer for \"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\"" Aug 12 23:44:55.279683 containerd[1548]: time="2025-08-12T23:44:55.279579884Z" level=info msg="connecting to shim 05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5" address="unix:///run/containerd/s/abdd36aacfc6f102caf88bf8dae746e0bbb5690739d346d1f55024df40a47567" protocol=ttrpc version=3 Aug 12 23:44:55.309470 systemd[1]: Started cri-containerd-05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5.scope - libcontainer container 05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5. Aug 12 23:44:55.365157 containerd[1548]: time="2025-08-12T23:44:55.364934846Z" level=info msg="StartContainer for \"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\" returns successfully" Aug 12 23:44:55.382366 systemd[1]: cri-containerd-05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5.scope: Deactivated successfully. 
Aug 12 23:44:55.389806 containerd[1548]: time="2025-08-12T23:44:55.389576783Z" level=info msg="received exit event container_id:\"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\" id:\"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\" pid:3317 exited_at:{seconds:1755042295 nanos:388650046}" Aug 12 23:44:55.390319 containerd[1548]: time="2025-08-12T23:44:55.390245166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\" id:\"05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5\" pid:3317 exited_at:{seconds:1755042295 nanos:388650046}" Aug 12 23:44:55.424482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d14a747271c68817e74a53761c8b026dbe340b6588443824f8c78e0b7cd1f5-rootfs.mount: Deactivated successfully. Aug 12 23:44:55.451763 kubelet[2678]: E0812 23:44:55.451486 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9cgr" podUID="8e2e417a-afc9-4f75-a471-5551fad879ea" Aug 12 23:44:55.606248 containerd[1548]: time="2025-08-12T23:44:55.605386604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:44:55.624845 kubelet[2678]: I0812 23:44:55.624776 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-794785f7bf-gq7vk" podStartSLOduration=2.689545585 podStartE2EDuration="4.624753555s" podCreationTimestamp="2025-08-12 23:44:51 +0000 UTC" firstStartedPulling="2025-08-12 23:44:52.050258301 +0000 UTC m=+20.713494811" lastFinishedPulling="2025-08-12 23:44:53.985466231 +0000 UTC m=+22.648702781" observedRunningTime="2025-08-12 23:44:54.613499742 +0000 UTC m=+23.276736252" watchObservedRunningTime="2025-08-12 23:44:55.624753555 +0000 UTC 
m=+24.287990065" Aug 12 23:44:57.448045 kubelet[2678]: E0812 23:44:57.446467 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9cgr" podUID="8e2e417a-afc9-4f75-a471-5551fad879ea" Aug 12 23:44:58.115065 containerd[1548]: time="2025-08-12T23:44:58.114973044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:58.116556 containerd[1548]: time="2025-08-12T23:44:58.116344736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 12 23:44:58.117637 containerd[1548]: time="2025-08-12T23:44:58.117594030Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:58.121474 containerd[1548]: time="2025-08-12T23:44:58.121436470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:44:58.122861 containerd[1548]: time="2025-08-12T23:44:58.122666084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.517213881s" Aug 12 23:44:58.122861 containerd[1548]: time="2025-08-12T23:44:58.122729123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:44:58.127317 containerd[1548]: time="2025-08-12T23:44:58.127259468Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:44:58.141233 containerd[1548]: time="2025-08-12T23:44:58.140462673Z" level=info msg="Container 81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:44:58.148443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591650675.mount: Deactivated successfully. Aug 12 23:44:58.156316 containerd[1548]: time="2025-08-12T23:44:58.156268224Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\"" Aug 12 23:44:58.158363 containerd[1548]: time="2025-08-12T23:44:58.158326221Z" level=info msg="StartContainer for \"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\"" Aug 12 23:44:58.160562 containerd[1548]: time="2025-08-12T23:44:58.160075024Z" level=info msg="connecting to shim 81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a" address="unix:///run/containerd/s/abdd36aacfc6f102caf88bf8dae746e0bbb5690739d346d1f55024df40a47567" protocol=ttrpc version=3 Aug 12 23:44:58.192408 systemd[1]: Started cri-containerd-81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a.scope - libcontainer container 81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a. 
Aug 12 23:44:58.243290 containerd[1548]: time="2025-08-12T23:44:58.243187972Z" level=info msg="StartContainer for \"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\" returns successfully" Aug 12 23:44:58.782273 containerd[1548]: time="2025-08-12T23:44:58.782123302Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:44:58.786219 systemd[1]: cri-containerd-81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a.scope: Deactivated successfully. Aug 12 23:44:58.786808 systemd[1]: cri-containerd-81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a.scope: Consumed 521ms CPU time, 190.4M memory peak, 165.8M written to disk. Aug 12 23:44:58.789952 containerd[1548]: time="2025-08-12T23:44:58.789902299Z" level=info msg="received exit event container_id:\"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\" id:\"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\" pid:3380 exited_at:{seconds:1755042298 nanos:789651465}" Aug 12 23:44:58.790514 containerd[1548]: time="2025-08-12T23:44:58.790137215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\" id:\"81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a\" pid:3380 exited_at:{seconds:1755042298 nanos:789651465}" Aug 12 23:44:58.815149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81690404b870a61b12a71fc075cac16ab0b354808d3838f3eab4a902d9df730a-rootfs.mount: Deactivated successfully. 
Aug 12 23:44:58.840286 kubelet[2678]: I0812 23:44:58.840118 2678 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:44:58.899577 systemd[1]: Created slice kubepods-burstable-pod0ac1771f_2081_4e4a_babd_6f3aff24a738.slice - libcontainer container kubepods-burstable-pod0ac1771f_2081_4e4a_babd_6f3aff24a738.slice. Aug 12 23:44:58.919997 systemd[1]: Created slice kubepods-burstable-podc47c77fc_ea5a_46ba_a097_cbea896c6dc5.slice - libcontainer container kubepods-burstable-podc47c77fc_ea5a_46ba_a097_cbea896c6dc5.slice. Aug 12 23:44:58.933568 systemd[1]: Created slice kubepods-besteffort-pod75d556c6_ec19_418d_87c8_eb49c39093ba.slice - libcontainer container kubepods-besteffort-pod75d556c6_ec19_418d_87c8_eb49c39093ba.slice. Aug 12 23:44:58.944554 systemd[1]: Created slice kubepods-besteffort-pod5b3e9e11_ddf2_4174_81f6_041fc4ea6217.slice - libcontainer container kubepods-besteffort-pod5b3e9e11_ddf2_4174_81f6_041fc4ea6217.slice. Aug 12 23:44:58.949501 kubelet[2678]: I0812 23:44:58.948987 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rt4\" (UniqueName: \"kubernetes.io/projected/6a628041-f9ef-49db-b102-02e5038d5605-kube-api-access-n6rt4\") pod \"calico-apiserver-8d95f6697-s2n77\" (UID: \"6a628041-f9ef-49db-b102-02e5038d5605\") " pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" Aug 12 23:44:58.949501 kubelet[2678]: I0812 23:44:58.949035 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c44hr\" (UniqueName: \"kubernetes.io/projected/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-kube-api-access-c44hr\") pod \"whisker-fd94c85cc-h8s9s\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " pod="calico-system/whisker-fd94c85cc-h8s9s" Aug 12 23:44:58.949501 kubelet[2678]: I0812 23:44:58.949275 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6ls94\" (UniqueName: \"kubernetes.io/projected/0ac1771f-2081-4e4a-babd-6f3aff24a738-kube-api-access-6ls94\") pod \"coredns-668d6bf9bc-6rt9v\" (UID: \"0ac1771f-2081-4e4a-babd-6f3aff24a738\") " pod="kube-system/coredns-668d6bf9bc-6rt9v" Aug 12 23:44:58.949501 kubelet[2678]: I0812 23:44:58.949297 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/246b261b-8026-450f-b42d-480831756f1c-config\") pod \"goldmane-768f4c5c69-q9tw6\" (UID: \"246b261b-8026-450f-b42d-480831756f1c\") " pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:58.949501 kubelet[2678]: I0812 23:44:58.949331 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/246b261b-8026-450f-b42d-480831756f1c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-q9tw6\" (UID: \"246b261b-8026-450f-b42d-480831756f1c\") " pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:58.949753 kubelet[2678]: I0812 23:44:58.949349 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/246b261b-8026-450f-b42d-480831756f1c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-q9tw6\" (UID: \"246b261b-8026-450f-b42d-480831756f1c\") " pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:58.949753 kubelet[2678]: I0812 23:44:58.949368 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d556c6-ec19-418d-87c8-eb49c39093ba-tigera-ca-bundle\") pod \"calico-kube-controllers-7d885f6b46-lvzfn\" (UID: \"75d556c6-ec19-418d-87c8-eb49c39093ba\") " pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" Aug 12 23:44:58.949753 kubelet[2678]: I0812 23:44:58.949398 2678 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ac1771f-2081-4e4a-babd-6f3aff24a738-config-volume\") pod \"coredns-668d6bf9bc-6rt9v\" (UID: \"0ac1771f-2081-4e4a-babd-6f3aff24a738\") " pod="kube-system/coredns-668d6bf9bc-6rt9v" Aug 12 23:44:58.949753 kubelet[2678]: I0812 23:44:58.949416 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a628041-f9ef-49db-b102-02e5038d5605-calico-apiserver-certs\") pod \"calico-apiserver-8d95f6697-s2n77\" (UID: \"6a628041-f9ef-49db-b102-02e5038d5605\") " pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" Aug 12 23:44:58.949753 kubelet[2678]: I0812 23:44:58.949443 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87hz\" (UniqueName: \"kubernetes.io/projected/dc72f1ba-c6f4-4181-ad9b-1872ade37dbf-kube-api-access-c87hz\") pod \"calico-apiserver-8d95f6697-hkpgp\" (UID: \"dc72f1ba-c6f4-4181-ad9b-1872ade37dbf\") " pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" Aug 12 23:44:58.949858 kubelet[2678]: I0812 23:44:58.949695 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-ca-bundle\") pod \"whisker-fd94c85cc-h8s9s\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " pod="calico-system/whisker-fd94c85cc-h8s9s" Aug 12 23:44:58.949858 kubelet[2678]: I0812 23:44:58.949730 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psdk6\" (UniqueName: \"kubernetes.io/projected/c47c77fc-ea5a-46ba-a097-cbea896c6dc5-kube-api-access-psdk6\") pod \"coredns-668d6bf9bc-9wjth\" (UID: \"c47c77fc-ea5a-46ba-a097-cbea896c6dc5\") " pod="kube-system/coredns-668d6bf9bc-9wjth" Aug 12 
23:44:58.951098 kubelet[2678]: I0812 23:44:58.950916 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-backend-key-pair\") pod \"whisker-fd94c85cc-h8s9s\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " pod="calico-system/whisker-fd94c85cc-h8s9s" Aug 12 23:44:58.951098 kubelet[2678]: I0812 23:44:58.950992 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c47c77fc-ea5a-46ba-a097-cbea896c6dc5-config-volume\") pod \"coredns-668d6bf9bc-9wjth\" (UID: \"c47c77fc-ea5a-46ba-a097-cbea896c6dc5\") " pod="kube-system/coredns-668d6bf9bc-9wjth" Aug 12 23:44:58.951098 kubelet[2678]: I0812 23:44:58.951012 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pssn\" (UniqueName: \"kubernetes.io/projected/246b261b-8026-450f-b42d-480831756f1c-kube-api-access-8pssn\") pod \"goldmane-768f4c5c69-q9tw6\" (UID: \"246b261b-8026-450f-b42d-480831756f1c\") " pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:58.951098 kubelet[2678]: I0812 23:44:58.951077 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc72f1ba-c6f4-4181-ad9b-1872ade37dbf-calico-apiserver-certs\") pod \"calico-apiserver-8d95f6697-hkpgp\" (UID: \"dc72f1ba-c6f4-4181-ad9b-1872ade37dbf\") " pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" Aug 12 23:44:58.951098 kubelet[2678]: I0812 23:44:58.951103 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdtd\" (UniqueName: \"kubernetes.io/projected/75d556c6-ec19-418d-87c8-eb49c39093ba-kube-api-access-2pdtd\") pod \"calico-kube-controllers-7d885f6b46-lvzfn\" (UID: 
\"75d556c6-ec19-418d-87c8-eb49c39093ba\") " pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" Aug 12 23:44:58.952123 systemd[1]: Created slice kubepods-besteffort-pod6a628041_f9ef_49db_b102_02e5038d5605.slice - libcontainer container kubepods-besteffort-pod6a628041_f9ef_49db_b102_02e5038d5605.slice. Aug 12 23:44:58.966011 systemd[1]: Created slice kubepods-besteffort-poddc72f1ba_c6f4_4181_ad9b_1872ade37dbf.slice - libcontainer container kubepods-besteffort-poddc72f1ba_c6f4_4181_ad9b_1872ade37dbf.slice. Aug 12 23:44:58.974495 systemd[1]: Created slice kubepods-besteffort-pod246b261b_8026_450f_b42d_480831756f1c.slice - libcontainer container kubepods-besteffort-pod246b261b_8026_450f_b42d_480831756f1c.slice. Aug 12 23:44:59.224291 containerd[1548]: time="2025-08-12T23:44:59.224149741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rt9v,Uid:0ac1771f-2081-4e4a-babd-6f3aff24a738,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:59.226741 containerd[1548]: time="2025-08-12T23:44:59.226557333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjth,Uid:c47c77fc-ea5a-46ba-a097-cbea896c6dc5,Namespace:kube-system,Attempt:0,}" Aug 12 23:44:59.240616 containerd[1548]: time="2025-08-12T23:44:59.240512901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d885f6b46-lvzfn,Uid:75d556c6-ec19-418d-87c8-eb49c39093ba,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:59.253524 containerd[1548]: time="2025-08-12T23:44:59.253269292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd94c85cc-h8s9s,Uid:5b3e9e11-ddf2-4174-81f6-041fc4ea6217,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:59.266448 containerd[1548]: time="2025-08-12T23:44:59.266406675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-s2n77,Uid:6a628041-f9ef-49db-b102-02e5038d5605,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:44:59.272732 containerd[1548]: 
time="2025-08-12T23:44:59.272662113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-hkpgp,Uid:dc72f1ba-c6f4-4181-ad9b-1872ade37dbf,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:44:59.282181 containerd[1548]: time="2025-08-12T23:44:59.281807814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q9tw6,Uid:246b261b-8026-450f-b42d-480831756f1c,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:59.424877 containerd[1548]: time="2025-08-12T23:44:59.424825180Z" level=error msg="Failed to destroy network for sandbox \"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.428366 containerd[1548]: time="2025-08-12T23:44:59.428316312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjth,Uid:c47c77fc-ea5a-46ba-a097-cbea896c6dc5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.428803 kubelet[2678]: E0812 23:44:59.428746 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.428883 kubelet[2678]: E0812 23:44:59.428823 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wjth" Aug 12 23:44:59.428883 kubelet[2678]: E0812 23:44:59.428842 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wjth" Aug 12 23:44:59.428941 kubelet[2678]: E0812 23:44:59.428882 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9wjth_kube-system(c47c77fc-ea5a-46ba-a097-cbea896c6dc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9wjth_kube-system(c47c77fc-ea5a-46ba-a097-cbea896c6dc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"554196918d2ec3c69059d1a3acb16a248693ee4e7c5a40861a4a8b625cb99de0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9wjth" podUID="c47c77fc-ea5a-46ba-a097-cbea896c6dc5" Aug 12 23:44:59.430765 containerd[1548]: time="2025-08-12T23:44:59.430508829Z" level=error msg="Failed to destroy network for sandbox \"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 12 23:44:59.433060 containerd[1548]: time="2025-08-12T23:44:59.432937862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rt9v,Uid:0ac1771f-2081-4e4a-babd-6f3aff24a738,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.433578 kubelet[2678]: E0812 23:44:59.433512 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.433650 kubelet[2678]: E0812 23:44:59.433600 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rt9v" Aug 12 23:44:59.433650 kubelet[2678]: E0812 23:44:59.433631 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6rt9v" Aug 12 23:44:59.433846 kubelet[2678]: E0812 23:44:59.433672 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rt9v_kube-system(0ac1771f-2081-4e4a-babd-6f3aff24a738)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rt9v_kube-system(0ac1771f-2081-4e4a-babd-6f3aff24a738)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f81c7c5e44e724a3079294bde52001c12b378f90d8ed649081b0142b0f9f932f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rt9v" podUID="0ac1771f-2081-4e4a-babd-6f3aff24a738" Aug 12 23:44:59.435312 containerd[1548]: time="2025-08-12T23:44:59.435157098Z" level=error msg="Failed to destroy network for sandbox \"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.437704 containerd[1548]: time="2025-08-12T23:44:59.437064501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-s2n77,Uid:6a628041-f9ef-49db-b102-02e5038d5605,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.437811 kubelet[2678]: E0812 23:44:59.437325 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.437811 kubelet[2678]: E0812 23:44:59.437375 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" Aug 12 23:44:59.437811 kubelet[2678]: E0812 23:44:59.437394 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" Aug 12 23:44:59.438516 kubelet[2678]: E0812 23:44:59.437428 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d95f6697-s2n77_calico-apiserver(6a628041-f9ef-49db-b102-02e5038d5605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d95f6697-s2n77_calico-apiserver(6a628041-f9ef-49db-b102-02e5038d5605)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b111b35a2cee9d6122f75be9515dce1103501f1dc8cead89fa85cc223c72b8aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" podUID="6a628041-f9ef-49db-b102-02e5038d5605" Aug 12 23:44:59.455614 containerd[1548]: time="2025-08-12T23:44:59.455576059Z" level=error msg="Failed to destroy network for sandbox \"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.458367 systemd[1]: Created slice kubepods-besteffort-pod8e2e417a_afc9_4f75_a471_5551fad879ea.slice - libcontainer container kubepods-besteffort-pod8e2e417a_afc9_4f75_a471_5551fad879ea.slice. Aug 12 23:44:59.462826 containerd[1548]: time="2025-08-12T23:44:59.462736479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd94c85cc-h8s9s,Uid:5b3e9e11-ddf2-4174-81f6-041fc4ea6217,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.464056 kubelet[2678]: E0812 23:44:59.463030 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.464056 kubelet[2678]: E0812 23:44:59.463078 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd94c85cc-h8s9s" Aug 12 23:44:59.464056 kubelet[2678]: E0812 23:44:59.463097 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd94c85cc-h8s9s" Aug 12 23:44:59.465097 kubelet[2678]: E0812 23:44:59.463130 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fd94c85cc-h8s9s_calico-system(5b3e9e11-ddf2-4174-81f6-041fc4ea6217)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fd94c85cc-h8s9s_calico-system(5b3e9e11-ddf2-4174-81f6-041fc4ea6217)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c201d5a6019d98f09a7fbf750db6ab3adfa0aacf4daa094ecb8d25bd501841ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fd94c85cc-h8s9s" podUID="5b3e9e11-ddf2-4174-81f6-041fc4ea6217" Aug 12 23:44:59.465355 containerd[1548]: time="2025-08-12T23:44:59.464341208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9cgr,Uid:8e2e417a-afc9-4f75-a471-5551fad879ea,Namespace:calico-system,Attempt:0,}" Aug 12 23:44:59.487022 containerd[1548]: time="2025-08-12T23:44:59.486830249Z" level=error msg="Failed to destroy network for sandbox 
\"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.490095 containerd[1548]: time="2025-08-12T23:44:59.489796071Z" level=error msg="Failed to destroy network for sandbox \"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.490235 containerd[1548]: time="2025-08-12T23:44:59.489962627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d885f6b46-lvzfn,Uid:75d556c6-ec19-418d-87c8-eb49c39093ba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.490585 kubelet[2678]: E0812 23:44:59.490550 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.490732 kubelet[2678]: E0812 23:44:59.490705 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" Aug 12 23:44:59.490894 kubelet[2678]: E0812 23:44:59.490823 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" Aug 12 23:44:59.490980 kubelet[2678]: E0812 23:44:59.490954 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d885f6b46-lvzfn_calico-system(75d556c6-ec19-418d-87c8-eb49c39093ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d885f6b46-lvzfn_calico-system(75d556c6-ec19-418d-87c8-eb49c39093ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b019efbbc6c567954f79b9773dcec314108b636e4a907baf90669e1186bda29f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" podUID="75d556c6-ec19-418d-87c8-eb49c39093ba" Aug 12 23:44:59.493852 containerd[1548]: time="2025-08-12T23:44:59.493791633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-hkpgp,Uid:dc72f1ba-c6f4-4181-ad9b-1872ade37dbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.494122 kubelet[2678]: E0812 23:44:59.494060 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.494122 kubelet[2678]: E0812 23:44:59.494116 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" Aug 12 23:44:59.494260 kubelet[2678]: E0812 23:44:59.494136 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" Aug 12 23:44:59.495277 kubelet[2678]: E0812 23:44:59.494180 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d95f6697-hkpgp_calico-apiserver(dc72f1ba-c6f4-4181-ad9b-1872ade37dbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-8d95f6697-hkpgp_calico-apiserver(dc72f1ba-c6f4-4181-ad9b-1872ade37dbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"667adfe09f808dfc6aea7c45851c659a3c9354936deea7b6daac24882c5df17e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" podUID="dc72f1ba-c6f4-4181-ad9b-1872ade37dbf" Aug 12 23:44:59.514093 containerd[1548]: time="2025-08-12T23:44:59.513988078Z" level=error msg="Failed to destroy network for sandbox \"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.517234 containerd[1548]: time="2025-08-12T23:44:59.517127857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q9tw6,Uid:246b261b-8026-450f-b42d-480831756f1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.518267 kubelet[2678]: E0812 23:44:59.517810 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.518546 kubelet[2678]: E0812 23:44:59.518479 
2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:59.518640 kubelet[2678]: E0812 23:44:59.518623 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q9tw6" Aug 12 23:44:59.520259 kubelet[2678]: E0812 23:44:59.519747 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-q9tw6_calico-system(246b261b-8026-450f-b42d-480831756f1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-q9tw6_calico-system(246b261b-8026-450f-b42d-480831756f1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"270e5508c589e6e07e2b04372835dc92b5d368b53c233ff2f68d42523a6edb50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-q9tw6" podUID="246b261b-8026-450f-b42d-480831756f1c" Aug 12 23:44:59.554235 containerd[1548]: time="2025-08-12T23:44:59.554100534Z" level=error msg="Failed to destroy network for sandbox \"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.556296 containerd[1548]: time="2025-08-12T23:44:59.556145054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9cgr,Uid:8e2e417a-afc9-4f75-a471-5551fad879ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.556935 kubelet[2678]: E0812 23:44:59.556687 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:44:59.556935 kubelet[2678]: E0812 23:44:59.556770 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:59.556935 kubelet[2678]: E0812 23:44:59.556796 2678 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9cgr" Aug 12 23:44:59.557134 kubelet[2678]: E0812 23:44:59.556861 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m9cgr_calico-system(8e2e417a-afc9-4f75-a471-5551fad879ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m9cgr_calico-system(8e2e417a-afc9-4f75-a471-5551fad879ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed1666b29f4056836a561750e2cab5951bd61ebbc357449e58b9c0adc0378ccf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m9cgr" podUID="8e2e417a-afc9-4f75-a471-5551fad879ea" Aug 12 23:44:59.629826 containerd[1548]: time="2025-08-12T23:44:59.629793056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:45:00.143763 systemd[1]: run-netns-cni\x2d552cd911\x2dc7c1\x2d30b9\x2d07ac\x2da8b92b79c718.mount: Deactivated successfully. Aug 12 23:45:00.143931 systemd[1]: run-netns-cni\x2d40c0148e\x2dd7ad\x2d8d68\x2d831d\x2d6028fe47bec9.mount: Deactivated successfully. Aug 12 23:45:00.144424 systemd[1]: run-netns-cni\x2d71361400\x2d0d6e\x2df917\x2dda33\x2dbbc45986fc5a.mount: Deactivated successfully. Aug 12 23:45:00.144545 systemd[1]: run-netns-cni\x2dec1c6adf\x2d470b\x2dabc6\x2dda7e\x2d51b20337c774.mount: Deactivated successfully. Aug 12 23:45:07.309492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037037470.mount: Deactivated successfully. 
Aug 12 23:45:07.342283 containerd[1548]: time="2025-08-12T23:45:07.340756638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 12 23:45:07.342283 containerd[1548]: time="2025-08-12T23:45:07.342194582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:07.343515 containerd[1548]: time="2025-08-12T23:45:07.343476687Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:07.344472 containerd[1548]: time="2025-08-12T23:45:07.344422196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.714438784s" Aug 12 23:45:07.344472 containerd[1548]: time="2025-08-12T23:45:07.344469515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:45:07.345501 containerd[1548]: time="2025-08-12T23:45:07.345470343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:07.367623 containerd[1548]: time="2025-08-12T23:45:07.367584686Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:45:07.381415 containerd[1548]: time="2025-08-12T23:45:07.381370565Z" level=info msg="Container 
48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:07.408447 containerd[1548]: time="2025-08-12T23:45:07.408385810Z" level=info msg="CreateContainer within sandbox \"e3e1df7772e57974c38743df8e693a79860ea2f65732d34a40cdf0efa53e45d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\"" Aug 12 23:45:07.409125 containerd[1548]: time="2025-08-12T23:45:07.409091602Z" level=info msg="StartContainer for \"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\"" Aug 12 23:45:07.412278 containerd[1548]: time="2025-08-12T23:45:07.412231925Z" level=info msg="connecting to shim 48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e" address="unix:///run/containerd/s/abdd36aacfc6f102caf88bf8dae746e0bbb5690739d346d1f55024df40a47567" protocol=ttrpc version=3 Aug 12 23:45:07.438262 systemd[1]: Started cri-containerd-48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e.scope - libcontainer container 48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e. Aug 12 23:45:07.498002 containerd[1548]: time="2025-08-12T23:45:07.497884047Z" level=info msg="StartContainer for \"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" returns successfully" Aug 12 23:45:07.650544 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:45:07.650653 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 12 23:45:07.793310 containerd[1548]: time="2025-08-12T23:45:07.793255203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"530dd43408d88460078c19d341e64a58f939d89c46b43e9e93225260b2701d01\" pid:3695 exit_status:1 exited_at:{seconds:1755042307 nanos:792657210}" Aug 12 23:45:07.827810 kubelet[2678]: I0812 23:45:07.827725 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n7gvs" podStartSLOduration=1.765446082 podStartE2EDuration="16.827705362s" podCreationTimestamp="2025-08-12 23:44:51 +0000 UTC" firstStartedPulling="2025-08-12 23:44:52.283580819 +0000 UTC m=+20.946817329" lastFinishedPulling="2025-08-12 23:45:07.345840099 +0000 UTC m=+36.009076609" observedRunningTime="2025-08-12 23:45:07.693258569 +0000 UTC m=+36.356495079" watchObservedRunningTime="2025-08-12 23:45:07.827705362 +0000 UTC m=+36.490941872" Aug 12 23:45:07.923697 kubelet[2678]: I0812 23:45:07.923642 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-backend-key-pair\") pod \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " Aug 12 23:45:07.923697 kubelet[2678]: I0812 23:45:07.923704 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c44hr\" (UniqueName: \"kubernetes.io/projected/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-kube-api-access-c44hr\") pod \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " Aug 12 23:45:07.923966 kubelet[2678]: I0812 23:45:07.923735 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-ca-bundle\") pod 
\"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\" (UID: \"5b3e9e11-ddf2-4174-81f6-041fc4ea6217\") " Aug 12 23:45:07.924164 kubelet[2678]: I0812 23:45:07.924132 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5b3e9e11-ddf2-4174-81f6-041fc4ea6217" (UID: "5b3e9e11-ddf2-4174-81f6-041fc4ea6217"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:45:07.930426 kubelet[2678]: I0812 23:45:07.930382 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-kube-api-access-c44hr" (OuterVolumeSpecName: "kube-api-access-c44hr") pod "5b3e9e11-ddf2-4174-81f6-041fc4ea6217" (UID: "5b3e9e11-ddf2-4174-81f6-041fc4ea6217"). InnerVolumeSpecName "kube-api-access-c44hr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:45:07.932377 kubelet[2678]: I0812 23:45:07.932338 2678 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5b3e9e11-ddf2-4174-81f6-041fc4ea6217" (UID: "5b3e9e11-ddf2-4174-81f6-041fc4ea6217"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:45:08.024990 kubelet[2678]: I0812 23:45:08.024894 2678 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-backend-key-pair\") on node \"ci-4372-1-0-f-e67fdcf04d\" DevicePath \"\"" Aug 12 23:45:08.024990 kubelet[2678]: I0812 23:45:08.024972 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c44hr\" (UniqueName: \"kubernetes.io/projected/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-kube-api-access-c44hr\") on node \"ci-4372-1-0-f-e67fdcf04d\" DevicePath \"\"" Aug 12 23:45:08.024990 kubelet[2678]: I0812 23:45:08.024984 2678 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3e9e11-ddf2-4174-81f6-041fc4ea6217-whisker-ca-bundle\") on node \"ci-4372-1-0-f-e67fdcf04d\" DevicePath \"\"" Aug 12 23:45:08.313112 systemd[1]: var-lib-kubelet-pods-5b3e9e11\x2dddf2\x2d4174\x2d81f6\x2d041fc4ea6217-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc44hr.mount: Deactivated successfully. Aug 12 23:45:08.314291 systemd[1]: var-lib-kubelet-pods-5b3e9e11\x2dddf2\x2d4174\x2d81f6\x2d041fc4ea6217-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 12 23:45:08.673534 systemd[1]: Removed slice kubepods-besteffort-pod5b3e9e11_ddf2_4174_81f6_041fc4ea6217.slice - libcontainer container kubepods-besteffort-pod5b3e9e11_ddf2_4174_81f6_041fc4ea6217.slice. 
Aug 12 23:45:08.762426 kubelet[2678]: W0812 23:45:08.762030 2678 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4372-1-0-f-e67fdcf04d" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372-1-0-f-e67fdcf04d' and this object Aug 12 23:45:08.762912 kubelet[2678]: W0812 23:45:08.762658 2678 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4372-1-0-f-e67fdcf04d" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372-1-0-f-e67fdcf04d' and this object Aug 12 23:45:08.765309 kubelet[2678]: E0812 23:45:08.764687 2678 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4372-1-0-f-e67fdcf04d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-1-0-f-e67fdcf04d' and this object" logger="UnhandledError" Aug 12 23:45:08.765309 kubelet[2678]: I0812 23:45:08.762029 2678 status_manager.go:890] "Failed to get status for pod" podUID="5e34d0f9-672d-4091-a051-b8ebdfb5d5e0" pod="calico-system/whisker-fbd976b7f-f9b2m" err="pods \"whisker-fbd976b7f-f9b2m\" is forbidden: User \"system:node:ci-4372-1-0-f-e67fdcf04d\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-1-0-f-e67fdcf04d' and this object" Aug 12 23:45:08.766283 kubelet[2678]: E0812 23:45:08.766236 2678 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4372-1-0-f-e67fdcf04d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-1-0-f-e67fdcf04d' and this object" logger="UnhandledError" Aug 12 23:45:08.770441 systemd[1]: Created slice kubepods-besteffort-pod5e34d0f9_672d_4091_a051_b8ebdfb5d5e0.slice - libcontainer container kubepods-besteffort-pod5e34d0f9_672d_4091_a051_b8ebdfb5d5e0.slice. Aug 12 23:45:08.814686 containerd[1548]: time="2025-08-12T23:45:08.814621969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"e521dcad010d2ac0fd9d96dbc7bb301fa2975c97a2d2b89b6e79d128f30f5b13\" pid:3742 exit_status:1 exited_at:{seconds:1755042308 nanos:813962656}" Aug 12 23:45:08.830055 kubelet[2678]: I0812 23:45:08.829975 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95gx\" (UniqueName: \"kubernetes.io/projected/5e34d0f9-672d-4091-a051-b8ebdfb5d5e0-kube-api-access-r95gx\") pod \"whisker-fbd976b7f-f9b2m\" (UID: \"5e34d0f9-672d-4091-a051-b8ebdfb5d5e0\") " pod="calico-system/whisker-fbd976b7f-f9b2m" Aug 12 23:45:08.831502 kubelet[2678]: I0812 23:45:08.830357 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e34d0f9-672d-4091-a051-b8ebdfb5d5e0-whisker-ca-bundle\") pod \"whisker-fbd976b7f-f9b2m\" (UID: \"5e34d0f9-672d-4091-a051-b8ebdfb5d5e0\") " pod="calico-system/whisker-fbd976b7f-f9b2m" Aug 12 23:45:08.831502 kubelet[2678]: I0812 23:45:08.830703 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e34d0f9-672d-4091-a051-b8ebdfb5d5e0-whisker-backend-key-pair\") pod 
\"whisker-fbd976b7f-f9b2m\" (UID: \"5e34d0f9-672d-4091-a051-b8ebdfb5d5e0\") " pod="calico-system/whisker-fbd976b7f-f9b2m" Aug 12 23:45:09.449880 kubelet[2678]: I0812 23:45:09.449530 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3e9e11-ddf2-4174-81f6-041fc4ea6217" path="/var/lib/kubelet/pods/5b3e9e11-ddf2-4174-81f6-041fc4ea6217/volumes" Aug 12 23:45:09.753062 containerd[1548]: time="2025-08-12T23:45:09.752824909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"984f0fc40733d4723c0e37e35f70b18db02f5670b6c37a84e1ae59c9c166184a\" pid:3889 exit_status:1 exited_at:{seconds:1755042309 nanos:752056597}" Aug 12 23:45:09.876252 systemd-networkd[1417]: vxlan.calico: Link UP Aug 12 23:45:09.877017 systemd-networkd[1417]: vxlan.calico: Gained carrier Aug 12 23:45:09.977618 containerd[1548]: time="2025-08-12T23:45:09.977528087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbd976b7f-f9b2m,Uid:5e34d0f9-672d-4091-a051-b8ebdfb5d5e0,Namespace:calico-system,Attempt:0,}" Aug 12 23:45:10.257338 systemd-networkd[1417]: cali9accdd67bb8: Link UP Aug 12 23:45:10.258340 systemd-networkd[1417]: cali9accdd67bb8: Gained carrier Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.093 [INFO][3936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0 whisker-fbd976b7f- calico-system 5e34d0f9-672d-4091-a051-b8ebdfb5d5e0 858 0 2025-08-12 23:45:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fbd976b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d whisker-fbd976b7f-f9b2m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9accdd67bb8 [] [] }} 
ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.094 [INFO][3936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.170 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" HandleID="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.170 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" HandleID="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"whisker-fbd976b7f-f9b2m", "timestamp":"2025-08-12 23:45:10.170639336 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.171 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.171 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.171 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.184 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.200 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.209 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.212 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.218 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.218 [INFO][3958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.221 [INFO][3958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81 Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.229 [INFO][3958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" 
host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.242 [INFO][3958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.1/26] block=192.168.41.0/26 handle="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.242 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.1/26] handle="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.242 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:10.286108 containerd[1548]: 2025-08-12 23:45:10.242 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.1/26] IPv6=[] ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" HandleID="k8s-pod-network.34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286864 containerd[1548]: 2025-08-12 23:45:10.247 [INFO][3936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0", GenerateName:"whisker-fbd976b7f-", Namespace:"calico-system", SelfLink:"", UID:"5e34d0f9-672d-4091-a051-b8ebdfb5d5e0", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 45, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fbd976b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"whisker-fbd976b7f-f9b2m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9accdd67bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.286864 containerd[1548]: 2025-08-12 23:45:10.248 [INFO][3936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.1/32] ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286864 containerd[1548]: 2025-08-12 23:45:10.248 [INFO][3936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9accdd67bb8 ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286864 containerd[1548]: 2025-08-12 23:45:10.262 [INFO][3936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" 
WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.286864 containerd[1548]: 2025-08-12 23:45:10.265 [INFO][3936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0", GenerateName:"whisker-fbd976b7f-", Namespace:"calico-system", SelfLink:"", UID:"5e34d0f9-672d-4091-a051-b8ebdfb5d5e0", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 45, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fbd976b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81", Pod:"whisker-fbd976b7f-f9b2m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9accdd67bb8", MAC:"fe:e8:f3:a8:76:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.286864 
containerd[1548]: 2025-08-12 23:45:10.280 [INFO][3936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" Namespace="calico-system" Pod="whisker-fbd976b7f-f9b2m" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-whisker--fbd976b7f--f9b2m-eth0" Aug 12 23:45:10.329636 containerd[1548]: time="2025-08-12T23:45:10.329547050Z" level=info msg="connecting to shim 34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81" address="unix:///run/containerd/s/060a2bcd123876c67876d03a643df90b009dba67823533f58b8f0af1952dcb32" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:10.365434 systemd[1]: Started cri-containerd-34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81.scope - libcontainer container 34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81. Aug 12 23:45:10.418837 containerd[1548]: time="2025-08-12T23:45:10.418684074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbd976b7f-f9b2m,Uid:5e34d0f9-672d-4091-a051-b8ebdfb5d5e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81\"" Aug 12 23:45:10.421997 containerd[1548]: time="2025-08-12T23:45:10.421936683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:45:10.446885 containerd[1548]: time="2025-08-12T23:45:10.446755724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rt9v,Uid:0ac1771f-2081-4e4a-babd-6f3aff24a738,Namespace:kube-system,Attempt:0,}" Aug 12 23:45:10.447463 containerd[1548]: time="2025-08-12T23:45:10.447404918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-hkpgp,Uid:dc72f1ba-c6f4-4181-ad9b-1872ade37dbf,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:45:10.634490 systemd-networkd[1417]: calicae3daac9f6: Link UP Aug 12 23:45:10.635412 systemd-networkd[1417]: calicae3daac9f6: Gained carrier Aug 
12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.527 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0 coredns-668d6bf9bc- kube-system 0ac1771f-2081-4e4a-babd-6f3aff24a738 782 0 2025-08-12 23:44:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d coredns-668d6bf9bc-6rt9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicae3daac9f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.527 [INFO][4046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.570 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" HandleID="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.570 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" HandleID="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" 
Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"coredns-668d6bf9bc-6rt9v", "timestamp":"2025-08-12 23:45:10.570323377 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.570 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.570 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.570 [INFO][4071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.583 [INFO][4071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.589 [INFO][4071] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.597 [INFO][4071] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.600 [INFO][4071] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.604 [INFO][4071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 
containerd[1548]: 2025-08-12 23:45:10.605 [INFO][4071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.608 [INFO][4071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474 Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.614 [INFO][4071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.625 [INFO][4071] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.2/26] block=192.168.41.0/26 handle="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.625 [INFO][4071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.2/26] handle="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.626 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:45:10.662047 containerd[1548]: 2025-08-12 23:45:10.626 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.2/26] IPv6=[] ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" HandleID="k8s-pod-network.8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662652 containerd[1548]: 2025-08-12 23:45:10.629 [INFO][4046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ac1771f-2081-4e4a-babd-6f3aff24a738", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"coredns-668d6bf9bc-6rt9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calicae3daac9f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.662652 containerd[1548]: 2025-08-12 23:45:10.629 [INFO][4046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.2/32] ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662652 containerd[1548]: 2025-08-12 23:45:10.629 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicae3daac9f6 ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662652 containerd[1548]: 2025-08-12 23:45:10.635 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.662652 containerd[1548]: 2025-08-12 23:45:10.636 [INFO][4046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" 
WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ac1771f-2081-4e4a-babd-6f3aff24a738", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474", Pod:"coredns-668d6bf9bc-6rt9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicae3daac9f6", MAC:"22:50:c8:e2:a6:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.662849 
containerd[1548]: 2025-08-12 23:45:10.650 [INFO][4046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rt9v" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--6rt9v-eth0" Aug 12 23:45:10.700734 containerd[1548]: time="2025-08-12T23:45:10.700507247Z" level=info msg="connecting to shim 8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474" address="unix:///run/containerd/s/81d6e9c22a524c74662dd0c937917289996c2a7900d029bc6680bd9e4512c1e5" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:10.745163 systemd[1]: Started cri-containerd-8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474.scope - libcontainer container 8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474. Aug 12 23:45:10.758270 systemd-networkd[1417]: cali2825dfbdd8a: Link UP Aug 12 23:45:10.758512 systemd-networkd[1417]: cali2825dfbdd8a: Gained carrier Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.531 [INFO][4051] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0 calico-apiserver-8d95f6697- calico-apiserver dc72f1ba-c6f4-4181-ad9b-1872ade37dbf 791 0 2025-08-12 23:44:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d95f6697 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d calico-apiserver-8d95f6697-hkpgp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2825dfbdd8a [] [] }} ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" 
WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.531 [INFO][4051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.579 [INFO][4076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" HandleID="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.579 [INFO][4076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" HandleID="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"calico-apiserver-8d95f6697-hkpgp", "timestamp":"2025-08-12 23:45:10.579034334 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.579 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.626 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.626 [INFO][4076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.687 [INFO][4076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.696 [INFO][4076] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.708 [INFO][4076] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.712 [INFO][4076] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.718 [INFO][4076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.718 [INFO][4076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.720 [INFO][4076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2 Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.726 [INFO][4076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" 
host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.740 [INFO][4076] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.3/26] block=192.168.41.0/26 handle="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.740 [INFO][4076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.3/26] handle="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.741 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:10.792699 containerd[1548]: 2025-08-12 23:45:10.741 [INFO][4076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.3/26] IPv6=[] ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" HandleID="k8s-pod-network.f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.748 [INFO][4051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0", GenerateName:"calico-apiserver-8d95f6697-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc72f1ba-c6f4-4181-ad9b-1872ade37dbf", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d95f6697", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"calico-apiserver-8d95f6697-hkpgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2825dfbdd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.748 [INFO][4051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.3/32] ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.748 [INFO][4051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2825dfbdd8a ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.757 [INFO][4051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.762 [INFO][4051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0", GenerateName:"calico-apiserver-8d95f6697-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc72f1ba-c6f4-4181-ad9b-1872ade37dbf", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d95f6697", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2", Pod:"calico-apiserver-8d95f6697-hkpgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2825dfbdd8a", MAC:"0e:80:b4:6b:05:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:10.794057 containerd[1548]: 2025-08-12 23:45:10.788 [INFO][4051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-hkpgp" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--hkpgp-eth0" Aug 12 23:45:10.823742 containerd[1548]: time="2025-08-12T23:45:10.823694703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rt9v,Uid:0ac1771f-2081-4e4a-babd-6f3aff24a738,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474\"" Aug 12 23:45:10.837952 containerd[1548]: time="2025-08-12T23:45:10.837902367Z" level=info msg="CreateContainer within sandbox \"8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:45:10.842905 containerd[1548]: time="2025-08-12T23:45:10.842851679Z" level=info msg="connecting to shim f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2" address="unix:///run/containerd/s/bf39ecc6c971b9c2ff093a98f825ad8b3015e24c7ad4d9502e3ca03c61781bb0" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:10.856577 containerd[1548]: time="2025-08-12T23:45:10.856509908Z" level=info msg="Container 881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:10.870967 containerd[1548]: time="2025-08-12T23:45:10.870498974Z" level=info msg="CreateContainer within sandbox \"8e91aecf23ec854f1658060960f8c872fbbae1f673a32e30431bb9c6fbb02474\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501\"" Aug 12 23:45:10.871766 containerd[1548]: time="2025-08-12T23:45:10.871721842Z" level=info msg="StartContainer for \"881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501\"" Aug 12 23:45:10.873779 containerd[1548]: time="2025-08-12T23:45:10.873732463Z" level=info msg="connecting to shim 881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501" address="unix:///run/containerd/s/81d6e9c22a524c74662dd0c937917289996c2a7900d029bc6680bd9e4512c1e5" protocol=ttrpc version=3 Aug 12 23:45:10.891917 systemd[1]: Started cri-containerd-f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2.scope - libcontainer container f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2. Aug 12 23:45:10.903034 systemd[1]: Started cri-containerd-881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501.scope - libcontainer container 881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501. 
Aug 12 23:45:10.952353 containerd[1548]: time="2025-08-12T23:45:10.952161829Z" level=info msg="StartContainer for \"881a2cfd771a7a0b24bdcdafa09cc206443c1110f8be6f8d2ba39910412f2501\" returns successfully" Aug 12 23:45:10.988496 containerd[1548]: time="2025-08-12T23:45:10.988401881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-hkpgp,Uid:dc72f1ba-c6f4-4181-ad9b-1872ade37dbf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2\"" Aug 12 23:45:11.124473 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Aug 12 23:45:11.450365 containerd[1548]: time="2025-08-12T23:45:11.450267024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q9tw6,Uid:246b261b-8026-450f-b42d-480831756f1c,Namespace:calico-system,Attempt:0,}" Aug 12 23:45:11.451407 containerd[1548]: time="2025-08-12T23:45:11.450886177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-s2n77,Uid:6a628041-f9ef-49db-b102-02e5038d5605,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:45:11.646551 systemd-networkd[1417]: cali97063e02997: Link UP Aug 12 23:45:11.647446 systemd-networkd[1417]: cali97063e02997: Gained carrier Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.540 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0 calico-apiserver-8d95f6697- calico-apiserver 6a628041-f9ef-49db-b102-02e5038d5605 789 0 2025-08-12 23:44:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d95f6697 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d calico-apiserver-8d95f6697-s2n77 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97063e02997 [] [] }} ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.540 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.587 [INFO][4261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" HandleID="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" HandleID="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"calico-apiserver-8d95f6697-s2n77", "timestamp":"2025-08-12 23:45:11.587962839 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 
23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.600 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.606 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.614 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.617 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.620 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.621 [INFO][4261] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.623 [INFO][4261] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5 Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.629 [INFO][4261] ipam/ipam.go 1243: Writing block in order to 
claim IPs block=192.168.41.0/26 handle="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.639 [INFO][4261] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.4/26] block=192.168.41.0/26 handle="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.639 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.4/26] handle="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.639 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:11.673454 containerd[1548]: 2025-08-12 23:45:11.639 [INFO][4261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.4/26] IPv6=[] ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" HandleID="k8s-pod-network.d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.675168 containerd[1548]: 2025-08-12 23:45:11.642 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0", GenerateName:"calico-apiserver-8d95f6697-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a628041-f9ef-49db-b102-02e5038d5605", 
ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d95f6697", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"calico-apiserver-8d95f6697-s2n77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97063e02997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:11.675168 containerd[1548]: 2025-08-12 23:45:11.643 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.4/32] ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.675168 containerd[1548]: 2025-08-12 23:45:11.643 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97063e02997 ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.675168 containerd[1548]: 
2025-08-12 23:45:11.647 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.675168 containerd[1548]: 2025-08-12 23:45:11.649 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0", GenerateName:"calico-apiserver-8d95f6697-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a628041-f9ef-49db-b102-02e5038d5605", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d95f6697", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5", Pod:"calico-apiserver-8d95f6697-s2n77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97063e02997", MAC:"4a:49:7e:c8:f1:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:11.675168 containerd[1548]: 2025-08-12 23:45:11.669 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" Namespace="calico-apiserver" Pod="calico-apiserver-8d95f6697-s2n77" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--apiserver--8d95f6697--s2n77-eth0" Aug 12 23:45:11.700357 systemd-networkd[1417]: cali9accdd67bb8: Gained IPv6LL Aug 12 23:45:11.700801 systemd-networkd[1417]: calicae3daac9f6: Gained IPv6LL Aug 12 23:45:11.740934 containerd[1548]: time="2025-08-12T23:45:11.740315387Z" level=info msg="connecting to shim d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5" address="unix:///run/containerd/s/224431a2989653e2ac5736ece0eff7e4ce4c6a09ae7b940b8f9380b1612b652c" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:11.743648 kubelet[2678]: I0812 23:45:11.741924 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6rt9v" podStartSLOduration=34.741905607 podStartE2EDuration="34.741905607s" podCreationTimestamp="2025-08-12 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:45:11.714975988 +0000 UTC m=+40.378212498" watchObservedRunningTime="2025-08-12 23:45:11.741905607 +0000 UTC m=+40.405142117" Aug 12 23:45:11.805544 systemd[1]: Started cri-containerd-d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5.scope - libcontainer container d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5. 
Aug 12 23:45:11.825536 systemd-networkd[1417]: cali8dfaa7c7a44: Link UP Aug 12 23:45:11.827522 systemd-networkd[1417]: cali8dfaa7c7a44: Gained carrier Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.523 [INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0 goldmane-768f4c5c69- calico-system 246b261b-8026-450f-b42d-480831756f1c 792 0 2025-08-12 23:44:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d goldmane-768f4c5c69-q9tw6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8dfaa7c7a44 [] [] }} ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.523 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.587 [INFO][4256] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" HandleID="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4256] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" HandleID="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"goldmane-768f4c5c69-q9tw6", "timestamp":"2025-08-12 23:45:11.58784864 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.588 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.639 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.640 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.705 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.733 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.757 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.760 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.764 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.764 [INFO][4256] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.770 [INFO][4256] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.790 [INFO][4256] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.802 [INFO][4256] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.41.5/26] block=192.168.41.0/26 handle="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.803 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.5/26] handle="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.803 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:11.869676 containerd[1548]: 2025-08-12 23:45:11.804 [INFO][4256] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.5/26] IPv6=[] ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" HandleID="k8s-pod-network.32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.814 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"246b261b-8026-450f-b42d-480831756f1c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"goldmane-768f4c5c69-q9tw6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8dfaa7c7a44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.816 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.5/32] ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.816 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8dfaa7c7a44 ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.827 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.830 [INFO][4230] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"246b261b-8026-450f-b42d-480831756f1c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef", Pod:"goldmane-768f4c5c69-q9tw6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8dfaa7c7a44", MAC:"f6:5a:d3:1a:96:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:11.870248 containerd[1548]: 2025-08-12 23:45:11.862 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" Namespace="calico-system" Pod="goldmane-768f4c5c69-q9tw6" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-goldmane--768f4c5c69--q9tw6-eth0" Aug 12 23:45:11.923022 containerd[1548]: time="2025-08-12T23:45:11.922969671Z" level=info msg="connecting to shim 32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef" address="unix:///run/containerd/s/07cb0f138edbceff5f65b370877ac69886327e506fd1baf5ee510474959a8c78" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:11.931885 containerd[1548]: time="2025-08-12T23:45:11.931841999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d95f6697-s2n77,Uid:6a628041-f9ef-49db-b102-02e5038d5605,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5\"" Aug 12 23:45:11.966541 systemd[1]: Started cri-containerd-32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef.scope - libcontainer container 32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef. 
Aug 12 23:45:12.010639 containerd[1548]: time="2025-08-12T23:45:12.010596729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:12.012103 containerd[1548]: time="2025-08-12T23:45:12.012062080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:45:12.013510 containerd[1548]: time="2025-08-12T23:45:12.013447155Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:12.018902 containerd[1548]: time="2025-08-12T23:45:12.018821417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:12.020548 containerd[1548]: time="2025-08-12T23:45:12.020501721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.59852228s" Aug 12 23:45:12.020548 containerd[1548]: time="2025-08-12T23:45:12.020545280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:45:12.023809 containerd[1548]: time="2025-08-12T23:45:12.023714935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:45:12.027105 containerd[1548]: time="2025-08-12T23:45:12.027019866Z" level=info msg="CreateContainer within sandbox \"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81\" for 
container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:45:12.027692 containerd[1548]: time="2025-08-12T23:45:12.027559248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q9tw6,Uid:246b261b-8026-450f-b42d-480831756f1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef\"" Aug 12 23:45:12.041227 containerd[1548]: time="2025-08-12T23:45:12.041079561Z" level=info msg="Container ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:12.046066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602160246.mount: Deactivated successfully. Aug 12 23:45:12.055646 containerd[1548]: time="2025-08-12T23:45:12.055542603Z" level=info msg="CreateContainer within sandbox \"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739\"" Aug 12 23:45:12.056911 containerd[1548]: time="2025-08-12T23:45:12.056872519Z" level=info msg="StartContainer for \"ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739\"" Aug 12 23:45:12.059095 containerd[1548]: time="2025-08-12T23:45:12.059046127Z" level=info msg="connecting to shim ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739" address="unix:///run/containerd/s/060a2bcd123876c67876d03a643df90b009dba67823533f58b8f0af1952dcb32" protocol=ttrpc version=3 Aug 12 23:45:12.085410 systemd[1]: Started cri-containerd-ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739.scope - libcontainer container ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739. 
Aug 12 23:45:12.142633 containerd[1548]: time="2025-08-12T23:45:12.142584605Z" level=info msg="StartContainer for \"ca4685f80a626c8364653e39d5c1cfcba808267d7217396824b3633c236d2739\" returns successfully" Aug 12 23:45:12.660460 systemd-networkd[1417]: cali2825dfbdd8a: Gained IPv6LL Aug 12 23:45:13.172486 systemd-networkd[1417]: cali97063e02997: Gained IPv6LL Aug 12 23:45:13.448029 containerd[1548]: time="2025-08-12T23:45:13.447713589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d885f6b46-lvzfn,Uid:75d556c6-ec19-418d-87c8-eb49c39093ba,Namespace:calico-system,Attempt:0,}" Aug 12 23:45:13.618109 systemd-networkd[1417]: calieb80ba4827b: Link UP Aug 12 23:45:13.620108 systemd-networkd[1417]: calieb80ba4827b: Gained carrier Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.502 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0 calico-kube-controllers-7d885f6b46- calico-system 75d556c6-ec19-418d-87c8-eb49c39093ba 793 0 2025-08-12 23:44:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d885f6b46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d calico-kube-controllers-7d885f6b46-lvzfn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb80ba4827b [] [] }} ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.503 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.535 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" HandleID="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.535 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" HandleID="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"calico-kube-controllers-7d885f6b46-lvzfn", "timestamp":"2025-08-12 23:45:13.535367451 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.535 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.535 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.535 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.552 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.562 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.572 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.576 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.580 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.581 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.584 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.592 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.608 [INFO][4430] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.41.6/26] block=192.168.41.0/26 handle="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.608 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.6/26] handle="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.608 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:13.642579 containerd[1548]: 2025-08-12 23:45:13.608 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.6/26] IPv6=[] ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" HandleID="k8s-pod-network.31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.611 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0", GenerateName:"calico-kube-controllers-7d885f6b46-", Namespace:"calico-system", SelfLink:"", UID:"75d556c6-ec19-418d-87c8-eb49c39093ba", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d885f6b46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"calico-kube-controllers-7d885f6b46-lvzfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb80ba4827b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.611 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.6/32] ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.612 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb80ba4827b ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.621 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.623 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0", GenerateName:"calico-kube-controllers-7d885f6b46-", Namespace:"calico-system", SelfLink:"", UID:"75d556c6-ec19-418d-87c8-eb49c39093ba", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d885f6b46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c", Pod:"calico-kube-controllers-7d885f6b46-lvzfn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb80ba4827b", MAC:"72:17:79:eb:f4:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:13.643638 containerd[1548]: 2025-08-12 23:45:13.637 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" Namespace="calico-system" Pod="calico-kube-controllers-7d885f6b46-lvzfn" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-calico--kube--controllers--7d885f6b46--lvzfn-eth0" Aug 12 23:45:13.673047 containerd[1548]: time="2025-08-12T23:45:13.672991108Z" level=info msg="connecting to shim 31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c" address="unix:///run/containerd/s/db94e02768184f33684cf4b307558ace559a510353beffdc4f3e7c428c00c7b6" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:13.686297 systemd-networkd[1417]: cali8dfaa7c7a44: Gained IPv6LL Aug 12 23:45:13.716492 systemd[1]: Started cri-containerd-31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c.scope - libcontainer container 31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c. 
Aug 12 23:45:13.760745 containerd[1548]: time="2025-08-12T23:45:13.760600212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d885f6b46-lvzfn,Uid:75d556c6-ec19-418d-87c8-eb49c39093ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c\"" Aug 12 23:45:14.447519 containerd[1548]: time="2025-08-12T23:45:14.447219300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9cgr,Uid:8e2e417a-afc9-4f75-a471-5551fad879ea,Namespace:calico-system,Attempt:0,}" Aug 12 23:45:14.447866 containerd[1548]: time="2025-08-12T23:45:14.447809402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjth,Uid:c47c77fc-ea5a-46ba-a097-cbea896c6dc5,Namespace:kube-system,Attempt:0,}" Aug 12 23:45:14.709378 systemd-networkd[1417]: calieb80ba4827b: Gained IPv6LL Aug 12 23:45:14.725408 systemd-networkd[1417]: cali898d8a1b0d1: Link UP Aug 12 23:45:14.726521 systemd-networkd[1417]: cali898d8a1b0d1: Gained carrier Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.520 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0 csi-node-driver- calico-system 8e2e417a-afc9-4f75-a471-5551fad879ea 661 0 2025-08-12 23:44:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d csi-node-driver-m9cgr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali898d8a1b0d1 [] [] }} ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" 
WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.521 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.603 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" HandleID="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.605 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" HandleID="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"csi-node-driver-m9cgr", "timestamp":"2025-08-12 23:45:14.603448658 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.606 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.606 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.607 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.636 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.648 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.660 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.666 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.671 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.673 [INFO][4519] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.679 [INFO][4519] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.689 [INFO][4519] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.709 [INFO][4519] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.41.7/26] block=192.168.41.0/26 handle="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.710 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.7/26] handle="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.711 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:14.769642 containerd[1548]: 2025-08-12 23:45:14.711 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.7/26] IPv6=[] ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" HandleID="k8s-pod-network.ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 23:45:14.720 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e2e417a-afc9-4f75-a471-5551fad879ea", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"csi-node-driver-m9cgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali898d8a1b0d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 23:45:14.720 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.7/32] ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 23:45:14.720 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali898d8a1b0d1 ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 23:45:14.726 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 
23:45:14.727 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e2e417a-afc9-4f75-a471-5551fad879ea", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef", Pod:"csi-node-driver-m9cgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali898d8a1b0d1", MAC:"56:30:37:84:3d:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:14.773021 containerd[1548]: 2025-08-12 23:45:14.760 
[INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" Namespace="calico-system" Pod="csi-node-driver-m9cgr" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-csi--node--driver--m9cgr-eth0" Aug 12 23:45:14.841366 containerd[1548]: time="2025-08-12T23:45:14.841321344Z" level=info msg="connecting to shim ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef" address="unix:///run/containerd/s/d9fefb97d3eb2786e34e58fb7c446d5bad840b99bbbbc34c791dd669ecb269c2" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:14.855065 systemd-networkd[1417]: cali4b4ed963526: Link UP Aug 12 23:45:14.856490 systemd-networkd[1417]: cali4b4ed963526: Gained carrier Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.550 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0 coredns-668d6bf9bc- kube-system c47c77fc-ea5a-46ba-a097-cbea896c6dc5 786 0 2025-08-12 23:44:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-f-e67fdcf04d coredns-668d6bf9bc-9wjth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b4ed963526 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.550 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" 
WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.639 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" HandleID="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.641 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" HandleID="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aaf70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-f-e67fdcf04d", "pod":"coredns-668d6bf9bc-9wjth", "timestamp":"2025-08-12 23:45:14.639287338 +0000 UTC"}, Hostname:"ci-4372-1-0-f-e67fdcf04d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.642 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.711 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.714 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-f-e67fdcf04d' Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.760 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.784 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.794 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.798 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.806 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.806 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.813 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.821 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.836 [INFO][4529] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.41.8/26] block=192.168.41.0/26 handle="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.836 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.8/26] handle="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" host="ci-4372-1-0-f-e67fdcf04d" Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.836 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:45:14.895721 containerd[1548]: 2025-08-12 23:45:14.836 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.8/26] IPv6=[] ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" HandleID="k8s-pod-network.5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Workload="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.896657 containerd[1548]: 2025-08-12 23:45:14.845 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c47c77fc-ea5a-46ba-a097-cbea896c6dc5", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"", Pod:"coredns-668d6bf9bc-9wjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b4ed963526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:14.896657 containerd[1548]: 2025-08-12 23:45:14.846 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.8/32] ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.896657 containerd[1548]: 2025-08-12 23:45:14.846 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b4ed963526 ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.896657 containerd[1548]: 2025-08-12 23:45:14.859 [INFO][4499] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.896657 containerd[1548]: 2025-08-12 23:45:14.861 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c47c77fc-ea5a-46ba-a097-cbea896c6dc5", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-f-e67fdcf04d", ContainerID:"5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa", Pod:"coredns-668d6bf9bc-9wjth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b4ed963526", 
MAC:"6a:fa:7c:32:f0:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:45:14.897831 containerd[1548]: 2025-08-12 23:45:14.887 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjth" WorkloadEndpoint="ci--4372--1--0--f--e67fdcf04d-k8s-coredns--668d6bf9bc--9wjth-eth0" Aug 12 23:45:14.921081 systemd[1]: Started cri-containerd-ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef.scope - libcontainer container ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef. Aug 12 23:45:14.962692 containerd[1548]: time="2025-08-12T23:45:14.962128889Z" level=info msg="connecting to shim 5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa" address="unix:///run/containerd/s/14fd20d95494220611cafe1d228d257e61a7c8b84920dce11a61966e0ef2e5c5" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:45:15.005910 systemd[1]: Started cri-containerd-5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa.scope - libcontainer container 5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa. 
Aug 12 23:45:15.016434 containerd[1548]: time="2025-08-12T23:45:15.016251930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9cgr,Uid:8e2e417a-afc9-4f75-a471-5551fad879ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef\"" Aug 12 23:45:15.098644 containerd[1548]: time="2025-08-12T23:45:15.098607828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjth,Uid:c47c77fc-ea5a-46ba-a097-cbea896c6dc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa\"" Aug 12 23:45:15.104309 containerd[1548]: time="2025-08-12T23:45:15.104261696Z" level=info msg="CreateContainer within sandbox \"5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:45:15.123755 containerd[1548]: time="2025-08-12T23:45:15.123703585Z" level=info msg="Container 9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:15.134441 containerd[1548]: time="2025-08-12T23:45:15.134391620Z" level=info msg="CreateContainer within sandbox \"5a0440ffe960c67ca729cfc4bf99aac8233ea584b3a8f23dd13b1d070177c2aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd\"" Aug 12 23:45:15.135825 containerd[1548]: time="2025-08-12T23:45:15.135780498Z" level=info msg="StartContainer for \"9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd\"" Aug 12 23:45:15.139214 containerd[1548]: time="2025-08-12T23:45:15.138078508Z" level=info msg="connecting to shim 9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd" address="unix:///run/containerd/s/14fd20d95494220611cafe1d228d257e61a7c8b84920dce11a61966e0ef2e5c5" protocol=ttrpc version=3 Aug 12 23:45:15.176466 systemd[1]: Started 
cri-containerd-9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd.scope - libcontainer container 9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd. Aug 12 23:45:15.254054 containerd[1548]: time="2025-08-12T23:45:15.253335405Z" level=info msg="StartContainer for \"9591607a20a003f1bc61ad913f66586dd2ea1588f01424d159d9cd208d30c9bd\" returns successfully" Aug 12 23:45:15.751358 kubelet[2678]: I0812 23:45:15.751054 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9wjth" podStartSLOduration=38.751031241 podStartE2EDuration="38.751031241s" podCreationTimestamp="2025-08-12 23:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:45:15.750912764 +0000 UTC m=+44.414149234" watchObservedRunningTime="2025-08-12 23:45:15.751031241 +0000 UTC m=+44.414267791" Aug 12 23:45:15.790937 containerd[1548]: time="2025-08-12T23:45:15.790851471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:15.793154 containerd[1548]: time="2025-08-12T23:45:15.792725094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 12 23:45:15.794954 containerd[1548]: time="2025-08-12T23:45:15.794697674Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:15.800834 containerd[1548]: time="2025-08-12T23:45:15.800797728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:15.801891 containerd[1548]: time="2025-08-12T23:45:15.801506147Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.777748293s" Aug 12 23:45:15.801891 containerd[1548]: time="2025-08-12T23:45:15.801564505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:45:15.805676 containerd[1548]: time="2025-08-12T23:45:15.805399948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:45:15.808798 containerd[1548]: time="2025-08-12T23:45:15.808766406Z" level=info msg="CreateContainer within sandbox \"f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:45:15.823429 containerd[1548]: time="2025-08-12T23:45:15.823385602Z" level=info msg="Container f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:15.838130 containerd[1548]: time="2025-08-12T23:45:15.838090355Z" level=info msg="CreateContainer within sandbox \"f032c7ed5c65f6057ff449c76448bf845077ceefc5fb1c7f7dbf5cf4d05162e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112\"" Aug 12 23:45:15.840222 containerd[1548]: time="2025-08-12T23:45:15.839991617Z" level=info msg="StartContainer for \"f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112\"" Aug 12 23:45:15.843538 containerd[1548]: time="2025-08-12T23:45:15.843434233Z" level=info msg="connecting to shim f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112" 
address="unix:///run/containerd/s/bf39ecc6c971b9c2ff093a98f825ad8b3015e24c7ad4d9502e3ca03c61781bb0" protocol=ttrpc version=3 Aug 12 23:45:15.877417 systemd[1]: Started cri-containerd-f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112.scope - libcontainer container f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112. Aug 12 23:45:15.928551 containerd[1548]: time="2025-08-12T23:45:15.928502647Z" level=info msg="StartContainer for \"f85e2ba06b902f51e589608f6e39f0a0fb9757c4f2a60f1f6e245152e3374112\" returns successfully" Aug 12 23:45:16.052991 systemd-networkd[1417]: cali898d8a1b0d1: Gained IPv6LL Aug 12 23:45:16.251261 containerd[1548]: time="2025-08-12T23:45:16.251175290Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:45:16.255664 containerd[1548]: time="2025-08-12T23:45:16.255609319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 12 23:45:16.261021 containerd[1548]: time="2025-08-12T23:45:16.260975041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 455.538973ms" Aug 12 23:45:16.261113 containerd[1548]: time="2025-08-12T23:45:16.261023039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:45:16.273233 containerd[1548]: time="2025-08-12T23:45:16.272416143Z" level=info msg="CreateContainer within sandbox \"d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:45:16.273233 containerd[1548]: time="2025-08-12T23:45:16.272834290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 12 23:45:16.285574 containerd[1548]: time="2025-08-12T23:45:16.285523275Z" level=info msg="Container 7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:45:16.303324 containerd[1548]: time="2025-08-12T23:45:16.302950760Z" level=info msg="CreateContainer within sandbox \"d7408ace77a0ed54ecc6aeeb53b02cb5db5e11eb30ce4a97710904159f6b34f5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a\"" Aug 12 23:45:16.308002 containerd[1548]: time="2025-08-12T23:45:16.307434468Z" level=info msg="StartContainer for \"7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a\"" Aug 12 23:45:16.312593 containerd[1548]: time="2025-08-12T23:45:16.312467079Z" level=info msg="connecting to shim 7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a" address="unix:///run/containerd/s/224431a2989653e2ac5736ece0eff7e4ce4c6a09ae7b940b8f9380b1612b652c" protocol=ttrpc version=3 Aug 12 23:45:16.334398 systemd[1]: Started cri-containerd-7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a.scope - libcontainer container 7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a. 
Aug 12 23:45:16.387512 containerd[1548]: time="2025-08-12T23:45:16.387400984Z" level=info msg="StartContainer for \"7fd86b99df3f13e3e0afa5b3090a8e775e4b5a5a9a0aa09152018f34faf11b7a\" returns successfully" Aug 12 23:45:16.630445 systemd-networkd[1417]: cali4b4ed963526: Gained IPv6LL Aug 12 23:45:16.743016 kubelet[2678]: I0812 23:45:16.742938 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d95f6697-s2n77" podStartSLOduration=24.409317134 podStartE2EDuration="28.742919717s" podCreationTimestamp="2025-08-12 23:44:48 +0000 UTC" firstStartedPulling="2025-08-12 23:45:11.934609604 +0000 UTC m=+40.597846114" lastFinishedPulling="2025-08-12 23:45:16.268212187 +0000 UTC m=+44.931448697" observedRunningTime="2025-08-12 23:45:16.74178959 +0000 UTC m=+45.405026100" watchObservedRunningTime="2025-08-12 23:45:16.742919717 +0000 UTC m=+45.406156227" Aug 12 23:45:16.763498 kubelet[2678]: I0812 23:45:16.763418 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d95f6697-hkpgp" podStartSLOduration=23.950774748 podStartE2EDuration="28.763397192s" podCreationTimestamp="2025-08-12 23:44:48 +0000 UTC" firstStartedPulling="2025-08-12 23:45:10.992068326 +0000 UTC m=+39.655304836" lastFinishedPulling="2025-08-12 23:45:15.80469077 +0000 UTC m=+44.467927280" observedRunningTime="2025-08-12 23:45:16.759857256 +0000 UTC m=+45.423093806" watchObservedRunningTime="2025-08-12 23:45:16.763397192 +0000 UTC m=+45.426633662" Aug 12 23:45:18.540144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228344117.mount: Deactivated successfully. 
Aug 12 23:45:19.238553 containerd[1548]: time="2025-08-12T23:45:19.238472321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:19.239136 containerd[1548]: time="2025-08-12T23:45:19.239100944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Aug 12 23:45:19.240503 containerd[1548]: time="2025-08-12T23:45:19.240467627Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:19.243327 containerd[1548]: time="2025-08-12T23:45:19.243293590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:19.244013 containerd[1548]: time="2025-08-12T23:45:19.243975171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.971108563s"
Aug 12 23:45:19.244013 containerd[1548]: time="2025-08-12T23:45:19.244010971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Aug 12 23:45:19.250343 containerd[1548]: time="2025-08-12T23:45:19.250186843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Aug 12 23:45:19.251078 containerd[1548]: time="2025-08-12T23:45:19.251010540Z" level=info msg="CreateContainer within sandbox \"32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 12 23:45:19.260546 containerd[1548]: time="2025-08-12T23:45:19.260411964Z" level=info msg="Container 3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:45:19.276554 containerd[1548]: time="2025-08-12T23:45:19.276442928Z" level=info msg="CreateContainer within sandbox \"32f4996ec2dd4b5c0994ff5db51c0fadb4630fba100d4f820f1e634d728d70ef\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\""
Aug 12 23:45:19.277932 containerd[1548]: time="2025-08-12T23:45:19.277899889Z" level=info msg="StartContainer for \"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\""
Aug 12 23:45:19.280054 containerd[1548]: time="2025-08-12T23:45:19.279962353Z" level=info msg="connecting to shim 3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa" address="unix:///run/containerd/s/07cb0f138edbceff5f65b370877ac69886327e506fd1baf5ee510474959a8c78" protocol=ttrpc version=3
Aug 12 23:45:19.312409 systemd[1]: Started cri-containerd-3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa.scope - libcontainer container 3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa.
Aug 12 23:45:19.389967 containerd[1548]: time="2025-08-12T23:45:19.389918841Z" level=info msg="StartContainer for \"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" returns successfully"
Aug 12 23:45:20.881634 containerd[1548]: time="2025-08-12T23:45:20.881581071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"e2e04b49bd3d1b477ef0a8b864fb0bca65f9d345c8d2cf2fa112147c240fb720\" pid:4836 exit_status:1 exited_at:{seconds:1755042320 nanos:880329464}"
Aug 12 23:45:21.946911 containerd[1548]: time="2025-08-12T23:45:21.946722273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"bb86be47a7a9cf84207e7907429b37601de74f8975c90211c473ae3b9d1fd5a0\" pid:4863 exit_status:1 exited_at:{seconds:1755042321 nanos:945932693}"
Aug 12 23:45:22.424934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332842791.mount: Deactivated successfully.
Aug 12 23:45:22.452555 containerd[1548]: time="2025-08-12T23:45:22.451310027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:22.452708 containerd[1548]: time="2025-08-12T23:45:22.452167566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581"
Aug 12 23:45:22.453246 containerd[1548]: time="2025-08-12T23:45:22.453184100Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:22.457147 containerd[1548]: time="2025-08-12T23:45:22.457088083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:22.458305 containerd[1548]: time="2025-08-12T23:45:22.458257493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.207666542s"
Aug 12 23:45:22.458442 containerd[1548]: time="2025-08-12T23:45:22.458307612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\""
Aug 12 23:45:22.462507 containerd[1548]: time="2025-08-12T23:45:22.462454068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 12 23:45:22.468456 containerd[1548]: time="2025-08-12T23:45:22.468359480Z" level=info msg="CreateContainer within sandbox \"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Aug 12 23:45:22.482389 containerd[1548]: time="2025-08-12T23:45:22.482316970Z" level=info msg="Container acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:45:22.497718 containerd[1548]: time="2025-08-12T23:45:22.497662706Z" level=info msg="CreateContainer within sandbox \"34b90ae2a606fb5dc735d62c3fb27135bd6deb0ee6fd7486f4c2b082a1f01c81\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5\""
Aug 12 23:45:22.499185 containerd[1548]: time="2025-08-12T23:45:22.499077990Z" level=info msg="StartContainer for \"acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5\""
Aug 12 23:45:22.503079 containerd[1548]: time="2025-08-12T23:45:22.502999612Z" level=info msg="connecting to shim acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5" address="unix:///run/containerd/s/060a2bcd123876c67876d03a643df90b009dba67823533f58b8f0af1952dcb32" protocol=ttrpc version=3
Aug 12 23:45:22.525553 systemd[1]: Started cri-containerd-acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5.scope - libcontainer container acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5.
Aug 12 23:45:22.596455 containerd[1548]: time="2025-08-12T23:45:22.596387951Z" level=info msg="StartContainer for \"acd19b3ce33037bb3fbeec97ae38ed5bd7ab086e4c8eb2c226212ade2b7c1ae5\" returns successfully"
Aug 12 23:45:22.795366 kubelet[2678]: I0812 23:45:22.795281    2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-q9tw6" podStartSLOduration=23.597407128 podStartE2EDuration="30.795261246s" podCreationTimestamp="2025-08-12 23:44:52 +0000 UTC" firstStartedPulling="2025-08-12 23:45:12.049680397 +0000 UTC m=+40.712916907" lastFinishedPulling="2025-08-12 23:45:19.247534515 +0000 UTC m=+47.910771025" observedRunningTime="2025-08-12 23:45:19.771890371 +0000 UTC m=+48.435126921" watchObservedRunningTime="2025-08-12 23:45:22.795261246 +0000 UTC m=+51.458497796"
Aug 12 23:45:25.901779 containerd[1548]: time="2025-08-12T23:45:25.901681037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:25.903251 containerd[1548]: time="2025-08-12T23:45:25.902880370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Aug 12 23:45:25.906646 containerd[1548]: time="2025-08-12T23:45:25.906604363Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:25.912209 containerd[1548]: time="2025-08-12T23:45:25.912154355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:25.913963 containerd[1548]: time="2025-08-12T23:45:25.913923314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.451207012s"
Aug 12 23:45:25.914109 containerd[1548]: time="2025-08-12T23:45:25.914090710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Aug 12 23:45:25.915415 containerd[1548]: time="2025-08-12T23:45:25.915225164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Aug 12 23:45:25.939212 containerd[1548]: time="2025-08-12T23:45:25.938872737Z" level=info msg="CreateContainer within sandbox \"31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 12 23:45:25.953150 containerd[1548]: time="2025-08-12T23:45:25.949868723Z" level=info msg="Container 923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:45:25.961393 containerd[1548]: time="2025-08-12T23:45:25.961346338Z" level=info msg="CreateContainer within sandbox \"31fe3f07dfe47a3621d345942552242a4e793bbe251b2ba6737f969537fbcf6c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\""
Aug 12 23:45:25.963475 containerd[1548]: time="2025-08-12T23:45:25.963437569Z" level=info msg="StartContainer for \"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\""
Aug 12 23:45:25.967676 containerd[1548]: time="2025-08-12T23:45:25.966992447Z" level=info msg="connecting to shim 923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693" address="unix:///run/containerd/s/db94e02768184f33684cf4b307558ace559a510353beffdc4f3e7c428c00c7b6" protocol=ttrpc version=3
Aug 12 23:45:26.008421 systemd[1]: Started cri-containerd-923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693.scope - libcontainer container 923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693.
Aug 12 23:45:26.071425 containerd[1548]: time="2025-08-12T23:45:26.071371796Z" level=info msg="StartContainer for \"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" returns successfully"
Aug 12 23:45:26.814559 kubelet[2678]: I0812 23:45:26.814025    2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fbd976b7f-f9b2m" podStartSLOduration=6.773171563 podStartE2EDuration="18.814005917s" podCreationTimestamp="2025-08-12 23:45:08 +0000 UTC" firstStartedPulling="2025-08-12 23:45:10.420880133 +0000 UTC m=+39.084116643" lastFinishedPulling="2025-08-12 23:45:22.461714487 +0000 UTC m=+51.124950997" observedRunningTime="2025-08-12 23:45:22.795669916 +0000 UTC m=+51.458906466" watchObservedRunningTime="2025-08-12 23:45:26.814005917 +0000 UTC m=+55.477242427"
Aug 12 23:45:26.816924 kubelet[2678]: I0812 23:45:26.816862    2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d885f6b46-lvzfn" podStartSLOduration=23.664537147 podStartE2EDuration="35.816721976s" podCreationTimestamp="2025-08-12 23:44:51 +0000 UTC" firstStartedPulling="2025-08-12 23:45:13.76283314 +0000 UTC m=+42.426069650" lastFinishedPulling="2025-08-12 23:45:25.915017929 +0000 UTC m=+54.578254479" observedRunningTime="2025-08-12 23:45:26.816242426 +0000 UTC m=+55.479478936" watchObservedRunningTime="2025-08-12 23:45:26.816721976 +0000 UTC m=+55.479958446"
Aug 12 23:45:26.842122 containerd[1548]: time="2025-08-12T23:45:26.842073485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"011316db2d5840d8fdb3e69a6a8d0f695b5e9ab6a877c8b987a7dd309559af90\" pid:4975 exited_at:{seconds:1755042326 nanos:840644437}"
Aug 12 23:45:27.286066 containerd[1548]: time="2025-08-12T23:45:27.285328033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:27.288782 containerd[1548]: time="2025-08-12T23:45:27.288699079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Aug 12 23:45:27.289862 containerd[1548]: time="2025-08-12T23:45:27.289824214Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:27.293475 containerd[1548]: time="2025-08-12T23:45:27.293425135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:27.294240 containerd[1548]: time="2025-08-12T23:45:27.294148479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.378863757s"
Aug 12 23:45:27.294486 containerd[1548]: time="2025-08-12T23:45:27.294377554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Aug 12 23:45:27.297330 containerd[1548]: time="2025-08-12T23:45:27.297290330Z" level=info msg="CreateContainer within sandbox \"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 12 23:45:27.315243 containerd[1548]: time="2025-08-12T23:45:27.314687069Z" level=info msg="Container 602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:45:27.333999 containerd[1548]: time="2025-08-12T23:45:27.333916687Z" level=info msg="CreateContainer within sandbox \"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3\""
Aug 12 23:45:27.337305 containerd[1548]: time="2025-08-12T23:45:27.337158616Z" level=info msg="StartContainer for \"602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3\""
Aug 12 23:45:27.341670 containerd[1548]: time="2025-08-12T23:45:27.341618879Z" level=info msg="connecting to shim 602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3" address="unix:///run/containerd/s/d9fefb97d3eb2786e34e58fb7c446d5bad840b99bbbbc34c791dd669ecb269c2" protocol=ttrpc version=3
Aug 12 23:45:27.382503 systemd[1]: Started cri-containerd-602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3.scope - libcontainer container 602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3.
Aug 12 23:45:27.446583 containerd[1548]: time="2025-08-12T23:45:27.446526138Z" level=info msg="StartContainer for \"602fc8edf5ea81b77386580f07304570b0a1e25e27998923cf81bdcb1d90f0d3\" returns successfully"
Aug 12 23:45:27.449786 containerd[1548]: time="2025-08-12T23:45:27.449671629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 12 23:45:29.029284 containerd[1548]: time="2025-08-12T23:45:29.029022004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:29.031722 containerd[1548]: time="2025-08-12T23:45:29.031581871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 12 23:45:29.034292 containerd[1548]: time="2025-08-12T23:45:29.034221776Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:29.041234 containerd[1548]: time="2025-08-12T23:45:29.041105912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:45:29.043631 containerd[1548]: time="2025-08-12T23:45:29.043498703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.593395363s"
Aug 12 23:45:29.043631 containerd[1548]: time="2025-08-12T23:45:29.043540302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 12 23:45:29.046751 containerd[1548]: time="2025-08-12T23:45:29.046570199Z" level=info msg="CreateContainer within sandbox \"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 12 23:45:29.059647 containerd[1548]: time="2025-08-12T23:45:29.059494010Z" level=info msg="Container 7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:45:29.074856 containerd[1548]: time="2025-08-12T23:45:29.074790372Z" level=info msg="CreateContainer within sandbox \"ddc0c44d88d486d1c1c53bc6bf4c43ac6809c712209b6c491cb0c6ef7093b0ef\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122\""
Aug 12 23:45:29.075593 containerd[1548]: time="2025-08-12T23:45:29.075537876Z" level=info msg="StartContainer for \"7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122\""
Aug 12 23:45:29.077944 containerd[1548]: time="2025-08-12T23:45:29.077858708Z" level=info msg="connecting to shim 7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122" address="unix:///run/containerd/s/d9fefb97d3eb2786e34e58fb7c446d5bad840b99bbbbc34c791dd669ecb269c2" protocol=ttrpc version=3
Aug 12 23:45:29.104405 systemd[1]: Started cri-containerd-7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122.scope - libcontainer container 7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122.
Aug 12 23:45:29.149235 containerd[1548]: time="2025-08-12T23:45:29.149137745Z" level=info msg="StartContainer for \"7b0d46413f83c26e898ab01e198f6810726ae01686a5c6715d04f644c6858122\" returns successfully"
Aug 12 23:45:29.567417 kubelet[2678]: I0812 23:45:29.567284    2678 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 12 23:45:29.572384 kubelet[2678]: I0812 23:45:29.572313    2678 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 12 23:45:29.838015 kubelet[2678]: I0812 23:45:29.837846    2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m9cgr" podStartSLOduration=24.812604098 podStartE2EDuration="38.837820499s" podCreationTimestamp="2025-08-12 23:44:51 +0000 UTC" firstStartedPulling="2025-08-12 23:45:15.018977487 +0000 UTC m=+43.682213997" lastFinishedPulling="2025-08-12 23:45:29.044193888 +0000 UTC m=+57.707430398" observedRunningTime="2025-08-12 23:45:29.835726383 +0000 UTC m=+58.498962933" watchObservedRunningTime="2025-08-12 23:45:29.837820499 +0000 UTC m=+58.501057049"
Aug 12 23:45:39.755666 containerd[1548]: time="2025-08-12T23:45:39.755528132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"c4d1e7cd6a813c84a83e2427c6bda46e4b147615723f2a67e0aee4574dd95b05\" pid:5084 exited_at:{seconds:1755042339 nanos:755163097}"
Aug 12 23:45:51.927287 containerd[1548]: time="2025-08-12T23:45:51.927163607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"2e266a52fdb1f5adcc70b5c0836b2cc5fc4cc56bdf8287c5c702c21e5c250317\" pid:5117 exited_at:{seconds:1755042351 nanos:926444496}"
Aug 12 23:45:56.832706 containerd[1548]: time="2025-08-12T23:45:56.832613779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"b5d18aa1144e0d33cec398ee8c2788b87c2af066b13694260f6b9232b2baf5b1\" pid:5145 exited_at:{seconds:1755042356 nanos:831865907}"
Aug 12 23:46:02.219249 containerd[1548]: time="2025-08-12T23:46:02.219170704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"d5ab79ebe2b5dd90978343e9a64a4daa027b44ed955da62f563946d25c31ec59\" pid:5167 exited_at:{seconds:1755042362 nanos:218367952}"
Aug 12 23:46:08.667874 containerd[1548]: time="2025-08-12T23:46:08.667829111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"e7f2ac34242c8518e3a199c146f7bb013440a40c6d2c3de4830a11c6238031c8\" pid:5193 exited_at:{seconds:1755042368 nanos:667338995}"
Aug 12 23:46:09.864583 containerd[1548]: time="2025-08-12T23:46:09.864443628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"d1c4c6b453598ecf6a917994a5471d322b21a4e08d8bfa0e36ce4e771beb22dc\" pid:5213 exited_at:{seconds:1755042369 nanos:863994071}"
Aug 12 23:46:14.116290 systemd[1]: Started sshd@7-49.13.54.157:22-204.76.203.28:60588.service - OpenSSH per-connection server daemon (204.76.203.28:60588).
Aug 12 23:46:14.312853 sshd[5227]: kex_exchange_identification: read: Connection reset by peer
Aug 12 23:46:14.312853 sshd[5227]: Connection reset by 204.76.203.28 port 60588
Aug 12 23:46:14.315324 systemd[1]: sshd@7-49.13.54.157:22-204.76.203.28:60588.service: Deactivated successfully.
Aug 12 23:46:14.426153 systemd[1]: Started sshd@8-49.13.54.157:22-204.76.203.28:60590.service - OpenSSH per-connection server daemon (204.76.203.28:60590).
Aug 12 23:46:14.550620 sshd[5231]: banner exchange: Connection from 204.76.203.28 port 60590: invalid format
Aug 12 23:46:14.553543 systemd[1]: sshd@8-49.13.54.157:22-204.76.203.28:60590.service: Deactivated successfully.
Aug 12 23:46:15.882496 systemd[1]: Started sshd@9-49.13.54.157:22-204.76.203.28:60596.service - OpenSSH per-connection server daemon (204.76.203.28:60596).
Aug 12 23:46:16.118418 sshd[5236]: banner exchange: Connection from 204.76.203.28 port 60596: invalid format
Aug 12 23:46:16.119842 systemd[1]: sshd@9-49.13.54.157:22-204.76.203.28:60596.service: Deactivated successfully.
Aug 12 23:46:17.386456 systemd[1]: Started sshd@10-49.13.54.157:22-204.76.203.28:60612.service - OpenSSH per-connection server daemon (204.76.203.28:60612).
Aug 12 23:46:17.498435 sshd[5240]: banner exchange: Connection from 204.76.203.28 port 60612: invalid format
Aug 12 23:46:17.500969 systemd[1]: sshd@10-49.13.54.157:22-204.76.203.28:60612.service: Deactivated successfully.
Aug 12 23:46:19.917560 systemd[1]: Started sshd@11-49.13.54.157:22-204.76.203.28:31004.service - OpenSSH per-connection server daemon (204.76.203.28:31004).
Aug 12 23:46:20.067873 sshd[5244]: Invalid user user from 204.76.203.28 port 31004
Aug 12 23:46:20.181315 sshd[5244]: Received disconnect from 204.76.203.28 port 31004:11: Bye Bye [preauth]
Aug 12 23:46:20.181315 sshd[5244]: Disconnected from invalid user user 204.76.203.28 port 31004 [preauth]
Aug 12 23:46:20.182550 systemd[1]: sshd@11-49.13.54.157:22-204.76.203.28:31004.service: Deactivated successfully.
Aug 12 23:46:21.613109 systemd[1]: Started sshd@12-49.13.54.157:22-204.76.203.28:31014.service - OpenSSH per-connection server daemon (204.76.203.28:31014).
Aug 12 23:46:21.778676 sshd[5249]: Invalid user admin from 204.76.203.28 port 31014
Aug 12 23:46:21.867666 containerd[1548]: time="2025-08-12T23:46:21.867181997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"1b3e64d8bf103d5eabfd8f682ba2091305feb7f3a72ce508b056652a9dc7a897\" pid:5264 exited_at:{seconds:1755042381 nanos:866704440}"
Aug 12 23:46:26.835027 containerd[1548]: time="2025-08-12T23:46:26.834951100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"26f2eccf20abeadd07f5553138cfdb1bfb7cc6875046405723323e4cf0226875\" pid:5288 exited_at:{seconds:1755042386 nanos:834120906}"
Aug 12 23:46:39.750815 containerd[1548]: time="2025-08-12T23:46:39.750733182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"ad7c4ef874eaae6c841ed81311d377d484d38b26545dea77b3e5bab590082578\" pid:5322 exited_at:{seconds:1755042399 nanos:750135185}"
Aug 12 23:46:49.014151 sshd[5249]: Received disconnect from 204.76.203.28 port 31014:11: Bye Bye [preauth]
Aug 12 23:46:49.014151 sshd[5249]: Disconnected from invalid user admin 204.76.203.28 port 31014 [preauth]
Aug 12 23:46:49.017273 systemd[1]: sshd@12-49.13.54.157:22-204.76.203.28:31014.service: Deactivated successfully.
Aug 12 23:46:51.846714 containerd[1548]: time="2025-08-12T23:46:51.846441857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"6c76bc4d35bf4504543942c4d268264c4f610fc4661bbe60a9dc0e58ebacfedc\" pid:5369 exited_at:{seconds:1755042411 nanos:846030939}"
Aug 12 23:46:53.176522 systemd[1]: Started sshd@13-49.13.54.157:22-204.76.203.28:55244.service - OpenSSH per-connection server daemon (204.76.203.28:55244).
Aug 12 23:46:53.822008 sshd[5379]: Received disconnect from 204.76.203.28 port 55244:11: Bye Bye [preauth]
Aug 12 23:46:53.822008 sshd[5379]: Disconnected from authenticating user root 204.76.203.28 port 55244 [preauth]
Aug 12 23:46:53.826390 systemd[1]: sshd@13-49.13.54.157:22-204.76.203.28:55244.service: Deactivated successfully.
Aug 12 23:46:55.366300 systemd[1]: Started sshd@14-49.13.54.157:22-204.76.203.28:55252.service - OpenSSH per-connection server daemon (204.76.203.28:55252).
Aug 12 23:46:55.535408 sshd[5384]: Invalid user admin from 204.76.203.28 port 55252
Aug 12 23:46:55.660428 sshd[5384]: Received disconnect from 204.76.203.28 port 55252:11: Bye Bye [preauth]
Aug 12 23:46:55.660428 sshd[5384]: Disconnected from invalid user admin 204.76.203.28 port 55252 [preauth]
Aug 12 23:46:55.663398 systemd[1]: sshd@14-49.13.54.157:22-204.76.203.28:55252.service: Deactivated successfully.
Aug 12 23:46:56.826216 containerd[1548]: time="2025-08-12T23:46:56.826149470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"752f580089a02a77796b5a605abce935d8faf766b0671f357f864d8d957bbf20\" pid:5402 exited_at:{seconds:1755042416 nanos:825583352}"
Aug 12 23:46:57.197317 systemd[1]: Started sshd@15-49.13.54.157:22-204.76.203.28:55256.service - OpenSSH per-connection server daemon (204.76.203.28:55256).
Aug 12 23:46:57.367184 sshd[5413]: Invalid user admin from 204.76.203.28 port 55256
Aug 12 23:46:57.444373 sshd[5413]: Received disconnect from 204.76.203.28 port 55256:11: Bye Bye [preauth]
Aug 12 23:46:57.444373 sshd[5413]: Disconnected from invalid user admin 204.76.203.28 port 55256 [preauth]
Aug 12 23:46:57.447625 systemd[1]: sshd@15-49.13.54.157:22-204.76.203.28:55256.service: Deactivated successfully.
Aug 12 23:46:58.909828 systemd[1]: Started sshd@16-49.13.54.157:22-204.76.203.28:20254.service - OpenSSH per-connection server daemon (204.76.203.28:20254).
Aug 12 23:46:59.076145 sshd[5418]: Invalid user user from 204.76.203.28 port 20254
Aug 12 23:46:59.147843 sshd[5418]: Received disconnect from 204.76.203.28 port 20254:11: Bye Bye [preauth]
Aug 12 23:46:59.147843 sshd[5418]: Disconnected from invalid user user 204.76.203.28 port 20254 [preauth]
Aug 12 23:46:59.151663 systemd[1]: sshd@16-49.13.54.157:22-204.76.203.28:20254.service: Deactivated successfully.
Aug 12 23:47:02.180709 containerd[1548]: time="2025-08-12T23:47:02.180646529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"8fd5f388371d9ddd702f2305dafef8d0fbc64993d548303240df47defe265599\" pid:5434 exited_at:{seconds:1755042422 nanos:180182772}"
Aug 12 23:47:03.022501 systemd[1]: Started sshd@17-49.13.54.157:22-204.76.203.28:20258.service - OpenSSH per-connection server daemon (204.76.203.28:20258).
Aug 12 23:47:03.285021 sshd[5445]: Received disconnect from 204.76.203.28 port 20258:11: Bye Bye [preauth]
Aug 12 23:47:03.285021 sshd[5445]: Disconnected from authenticating user root 204.76.203.28 port 20258 [preauth]
Aug 12 23:47:03.288798 systemd[1]: sshd@17-49.13.54.157:22-204.76.203.28:20258.service: Deactivated successfully.
Aug 12 23:47:04.710915 systemd[1]: Started sshd@18-49.13.54.157:22-204.76.203.28:20260.service - OpenSSH per-connection server daemon (204.76.203.28:20260).
Aug 12 23:47:04.962919 sshd[5450]: Received disconnect from 204.76.203.28 port 20260:11: Bye Bye [preauth]
Aug 12 23:47:04.962919 sshd[5450]: Disconnected from authenticating user root 204.76.203.28 port 20260 [preauth]
Aug 12 23:47:04.966931 systemd[1]: sshd@18-49.13.54.157:22-204.76.203.28:20260.service: Deactivated successfully.
Aug 12 23:47:08.652120 containerd[1548]: time="2025-08-12T23:47:08.651751517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"bc7d5c71b13c5e59aab83ffb88dee7ff9a47515001d02d8be6a79664fd7b0ad7\" pid:5470 exited_at:{seconds:1755042428 nanos:650394083}"
Aug 12 23:47:09.753088 containerd[1548]: time="2025-08-12T23:47:09.753029195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"c30f572e0d34f960b3e97d34e23af73852a9b08620adf2090955906e291911c4\" pid:5491 exited_at:{seconds:1755042429 nanos:752710037}"
Aug 12 23:47:12.800097 systemd[1]: Started sshd@19-49.13.54.157:22-139.178.68.195:55272.service - OpenSSH per-connection server daemon (139.178.68.195:55272).
Aug 12 23:47:13.820912 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 55272 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:13.825390 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:13.833152 systemd-logind[1520]: New session 8 of user core.
Aug 12 23:47:13.838424 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 12 23:47:14.633454 sshd[5510]: Connection closed by 139.178.68.195 port 55272
Aug 12 23:47:14.633878 sshd-session[5508]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:14.640192 systemd[1]: sshd@19-49.13.54.157:22-139.178.68.195:55272.service: Deactivated successfully.
Aug 12 23:47:14.647210 systemd[1]: session-8.scope: Deactivated successfully.
Aug 12 23:47:14.649961 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit.
Aug 12 23:47:14.653455 systemd-logind[1520]: Removed session 8.
Aug 12 23:47:19.807161 systemd[1]: Started sshd@20-49.13.54.157:22-139.178.68.195:55280.service - OpenSSH per-connection server daemon (139.178.68.195:55280).
Aug 12 23:47:20.830290 sshd[5523]: Accepted publickey for core from 139.178.68.195 port 55280 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:20.832934 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:20.840637 systemd-logind[1520]: New session 9 of user core.
Aug 12 23:47:20.852517 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 12 23:47:21.603392 sshd[5525]: Connection closed by 139.178.68.195 port 55280
Aug 12 23:47:21.604357 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:21.608295 systemd[1]: sshd@20-49.13.54.157:22-139.178.68.195:55280.service: Deactivated successfully.
Aug 12 23:47:21.612766 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:47:21.615536 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:47:21.616918 systemd-logind[1520]: Removed session 9.
Aug 12 23:47:21.855279 containerd[1548]: time="2025-08-12T23:47:21.855120453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"d07557c2fadef64317e3e0f71c66d33c2bfdeb6919c7cb3318c0d451aee6fb50\" pid:5548 exited_at:{seconds:1755042441 nanos:854654926}"
Aug 12 23:47:26.778038 systemd[1]: Started sshd@21-49.13.54.157:22-139.178.68.195:58524.service - OpenSSH per-connection server daemon (139.178.68.195:58524).
Aug 12 23:47:26.835183 containerd[1548]: time="2025-08-12T23:47:26.835118221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"dcd66a44a074e4501987754def6b35abbbd1a80f101c3be5a020d3d1b67d5f4c\" pid:5574 exited_at:{seconds:1755042446 nanos:834723615}"
Aug 12 23:47:27.795237 sshd[5560]: Accepted publickey for core from 139.178.68.195 port 58524 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:27.797312 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:27.802356 systemd-logind[1520]: New session 10 of user core.
Aug 12 23:47:27.810504 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 12 23:47:28.558463 sshd[5589]: Connection closed by 139.178.68.195 port 58524
Aug 12 23:47:28.559374 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:28.566187 systemd[1]: sshd@21-49.13.54.157:22-139.178.68.195:58524.service: Deactivated successfully.
Aug 12 23:47:28.568687 systemd[1]: session-10.scope: Deactivated successfully.
Aug 12 23:47:28.569824 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit.
Aug 12 23:47:28.572719 systemd-logind[1520]: Removed session 10.
Aug 12 23:47:28.733116 systemd[1]: Started sshd@22-49.13.54.157:22-139.178.68.195:58536.service - OpenSSH per-connection server daemon (139.178.68.195:58536).
Aug 12 23:47:29.745575 sshd[5602]: Accepted publickey for core from 139.178.68.195 port 58536 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:29.747854 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:29.757500 systemd-logind[1520]: New session 11 of user core.
Aug 12 23:47:29.763389 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 12 23:47:30.560667 sshd[5604]: Connection closed by 139.178.68.195 port 58536
Aug 12 23:47:30.562934 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:30.568369 systemd[1]: sshd@22-49.13.54.157:22-139.178.68.195:58536.service: Deactivated successfully.
Aug 12 23:47:30.573012 systemd[1]: session-11.scope: Deactivated successfully.
Aug 12 23:47:30.574391 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit.
Aug 12 23:47:30.577710 systemd-logind[1520]: Removed session 11.
Aug 12 23:47:30.736939 systemd[1]: Started sshd@23-49.13.54.157:22-139.178.68.195:49864.service - OpenSSH per-connection server daemon (139.178.68.195:49864).
Aug 12 23:47:31.779704 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 49864 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:31.782720 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:31.788140 systemd-logind[1520]: New session 12 of user core.
Aug 12 23:47:31.796699 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 12 23:47:32.125583 systemd[1]: Started sshd@24-49.13.54.157:22-204.76.203.28:14514.service - OpenSSH per-connection server daemon (204.76.203.28:14514).
Aug 12 23:47:32.573228 sshd[5619]: Connection closed by 139.178.68.195 port 49864
Aug 12 23:47:32.574303 sshd-session[5615]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:32.581152 systemd[1]: sshd@23-49.13.54.157:22-139.178.68.195:49864.service: Deactivated successfully.
Aug 12 23:47:32.583495 systemd[1]: session-12.scope: Deactivated successfully.
Aug 12 23:47:32.585886 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit.
Aug 12 23:47:32.587571 systemd-logind[1520]: Removed session 12.
Aug 12 23:47:32.641915 sshd[5621]: Invalid user ubuntu from 204.76.203.28 port 14514
Aug 12 23:47:33.152219 sshd[5621]: Received disconnect from 204.76.203.28 port 14514:11: Bye Bye [preauth]
Aug 12 23:47:33.152219 sshd[5621]: Disconnected from invalid user ubuntu 204.76.203.28 port 14514 [preauth]
Aug 12 23:47:33.155978 systemd[1]: sshd@24-49.13.54.157:22-204.76.203.28:14514.service: Deactivated successfully.
Aug 12 23:47:35.791298 systemd[1]: Started sshd@25-49.13.54.157:22-204.76.203.28:14530.service - OpenSSH per-connection server daemon (204.76.203.28:14530).
Aug 12 23:47:35.992740 sshd[5636]: Invalid user user1 from 204.76.203.28 port 14530
Aug 12 23:47:36.059888 sshd[5636]: Received disconnect from 204.76.203.28 port 14530:11: Bye Bye [preauth]
Aug 12 23:47:36.059888 sshd[5636]: Disconnected from invalid user user1 204.76.203.28 port 14530 [preauth]
Aug 12 23:47:36.064863 systemd[1]: sshd@25-49.13.54.157:22-204.76.203.28:14530.service: Deactivated successfully.
Aug 12 23:47:37.469592 systemd[1]: Started sshd@26-49.13.54.157:22-204.76.203.28:14546.service - OpenSSH per-connection server daemon (204.76.203.28:14546).
Aug 12 23:47:37.646941 sshd[5642]: Invalid user ubnt from 204.76.203.28 port 14546
Aug 12 23:47:37.731171 sshd[5642]: Received disconnect from 204.76.203.28 port 14546:11: Bye Bye [preauth]
Aug 12 23:47:37.731171 sshd[5642]: Disconnected from invalid user ubnt 204.76.203.28 port 14546 [preauth]
Aug 12 23:47:37.748649 systemd[1]: sshd@26-49.13.54.157:22-204.76.203.28:14546.service: Deactivated successfully.
Aug 12 23:47:37.758521 systemd[1]: Started sshd@27-49.13.54.157:22-139.178.68.195:49866.service - OpenSSH per-connection server daemon (139.178.68.195:49866).
Aug 12 23:47:38.775156 sshd[5647]: Accepted publickey for core from 139.178.68.195 port 49866 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:38.778599 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:38.788936 systemd-logind[1520]: New session 13 of user core.
Aug 12 23:47:38.793737 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 12 23:47:39.217793 systemd[1]: Started sshd@28-49.13.54.157:22-204.76.203.28:34062.service - OpenSSH per-connection server daemon (204.76.203.28:34062).
Aug 12 23:47:39.441403 sshd[5653]: Invalid user test from 204.76.203.28 port 34062
Aug 12 23:47:39.572353 sshd[5651]: Connection closed by 139.178.68.195 port 49866
Aug 12 23:47:39.572930 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:39.580442 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit.
Aug 12 23:47:39.581244 systemd[1]: sshd@27-49.13.54.157:22-139.178.68.195:49866.service: Deactivated successfully.
Aug 12 23:47:39.585745 systemd[1]: session-13.scope: Deactivated successfully.
Aug 12 23:47:39.589645 systemd-logind[1520]: Removed session 13.
Aug 12 23:47:39.609354 sshd[5653]: Received disconnect from 204.76.203.28 port 34062:11: Bye Bye [preauth]
Aug 12 23:47:39.609354 sshd[5653]: Disconnected from invalid user test 204.76.203.28 port 34062 [preauth]
Aug 12 23:47:39.613434 systemd[1]: sshd@28-49.13.54.157:22-204.76.203.28:34062.service: Deactivated successfully.
Aug 12 23:47:39.748060 systemd[1]: Started sshd@29-49.13.54.157:22-139.178.68.195:49874.service - OpenSSH per-connection server daemon (139.178.68.195:49874).
Aug 12 23:47:39.781595 containerd[1548]: time="2025-08-12T23:47:39.781549857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"4a77e092909874d864f1c17f8ccc4bd458394cba9fed8190f341cd7d7d4d2d65\" pid:5679 exited_at:{seconds:1755042459 nanos:780331323}"
Aug 12 23:47:40.780050 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 49874 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:40.782469 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:40.788019 systemd-logind[1520]: New session 14 of user core.
Aug 12 23:47:40.794550 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 12 23:47:41.104528 systemd[1]: Started sshd@30-49.13.54.157:22-204.76.203.28:34086.service - OpenSSH per-connection server daemon (204.76.203.28:34086).
Aug 12 23:47:41.276485 sshd[5696]: Received disconnect from 204.76.203.28 port 34086:11: Bye Bye [preauth]
Aug 12 23:47:41.277116 sshd[5696]: Disconnected from authenticating user root 204.76.203.28 port 34086 [preauth]
Aug 12 23:47:41.280545 systemd[1]: sshd@30-49.13.54.157:22-204.76.203.28:34086.service: Deactivated successfully.
Aug 12 23:47:41.725433 sshd[5694]: Connection closed by 139.178.68.195 port 49874
Aug 12 23:47:41.725922 sshd-session[5691]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:41.731791 systemd[1]: sshd@29-49.13.54.157:22-139.178.68.195:49874.service: Deactivated successfully.
Aug 12 23:47:41.731923 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit.
Aug 12 23:47:41.734417 systemd[1]: session-14.scope: Deactivated successfully.
Aug 12 23:47:41.737478 systemd-logind[1520]: Removed session 14.
Aug 12 23:47:41.901450 systemd[1]: Started sshd@31-49.13.54.157:22-139.178.68.195:32852.service - OpenSSH per-connection server daemon (139.178.68.195:32852).
Aug 12 23:47:42.497402 systemd[1]: Started sshd@32-49.13.54.157:22-204.76.203.28:34104.service - OpenSSH per-connection server daemon (204.76.203.28:34104).
Aug 12 23:47:42.649003 sshd[5712]: Invalid user admin from 204.76.203.28 port 34104
Aug 12 23:47:42.726157 sshd[5712]: Received disconnect from 204.76.203.28 port 34104:11: Bye Bye [preauth]
Aug 12 23:47:42.726157 sshd[5712]: Disconnected from invalid user admin 204.76.203.28 port 34104 [preauth]
Aug 12 23:47:42.729107 systemd[1]: sshd@32-49.13.54.157:22-204.76.203.28:34104.service: Deactivated successfully.
Aug 12 23:47:42.929710 sshd[5709]: Accepted publickey for core from 139.178.68.195 port 32852 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:42.932168 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:42.940512 systemd-logind[1520]: New session 15 of user core.
Aug 12 23:47:42.948492 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 12 23:47:44.167525 systemd[1]: Started sshd@33-49.13.54.157:22-204.76.203.28:34118.service - OpenSSH per-connection server daemon (204.76.203.28:34118).
Aug 12 23:47:44.335284 sshd[5716]: Connection closed by 139.178.68.195 port 32852
Aug 12 23:47:44.336530 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:44.342135 systemd[1]: sshd@31-49.13.54.157:22-139.178.68.195:32852.service: Deactivated successfully.
Aug 12 23:47:44.342251 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit.
Aug 12 23:47:44.347728 systemd[1]: session-15.scope: Deactivated successfully.
Aug 12 23:47:44.350400 systemd-logind[1520]: Removed session 15.
Aug 12 23:47:44.495281 sshd[5728]: Received disconnect from 204.76.203.28 port 34118:11: Bye Bye [preauth]
Aug 12 23:47:44.495281 sshd[5728]: Disconnected from authenticating user root 204.76.203.28 port 34118 [preauth]
Aug 12 23:47:44.498274 systemd[1]: sshd@33-49.13.54.157:22-204.76.203.28:34118.service: Deactivated successfully.
Aug 12 23:47:44.530261 systemd[1]: Started sshd@34-49.13.54.157:22-139.178.68.195:32856.service - OpenSSH per-connection server daemon (139.178.68.195:32856).
Aug 12 23:47:45.610961 sshd[5738]: Accepted publickey for core from 139.178.68.195 port 32856 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:45.613576 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:45.619514 systemd-logind[1520]: New session 16 of user core.
Aug 12 23:47:45.627500 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 12 23:47:46.110302 systemd[1]: Started sshd@35-49.13.54.157:22-204.76.203.28:34138.service - OpenSSH per-connection server daemon (204.76.203.28:34138).
Aug 12 23:47:46.296835 sshd[5742]: Invalid user user from 204.76.203.28 port 34138
Aug 12 23:47:46.419490 sshd[5742]: Received disconnect from 204.76.203.28 port 34138:11: Bye Bye [preauth]
Aug 12 23:47:46.419490 sshd[5742]: Disconnected from invalid user user 204.76.203.28 port 34138 [preauth]
Aug 12 23:47:46.421883 systemd[1]: sshd@35-49.13.54.157:22-204.76.203.28:34138.service: Deactivated successfully.
Aug 12 23:47:46.609270 sshd[5740]: Connection closed by 139.178.68.195 port 32856
Aug 12 23:47:46.610172 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:46.616334 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:47:46.617089 systemd[1]: sshd@34-49.13.54.157:22-139.178.68.195:32856.service: Deactivated successfully.
Aug 12 23:47:46.621504 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:47:46.627307 systemd-logind[1520]: Removed session 16.
Aug 12 23:47:46.774473 systemd[1]: Started sshd@36-49.13.54.157:22-139.178.68.195:32866.service - OpenSSH per-connection server daemon (139.178.68.195:32866).
Aug 12 23:47:47.803765 sshd[5755]: Accepted publickey for core from 139.178.68.195 port 32866 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:47.806114 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:47.814444 systemd[1]: Started sshd@37-49.13.54.157:22-204.76.203.28:34146.service - OpenSSH per-connection server daemon (204.76.203.28:34146).
Aug 12 23:47:47.821024 systemd-logind[1520]: New session 17 of user core.
Aug 12 23:47:47.827537 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:47:48.586104 sshd[5759]: Connection closed by 139.178.68.195 port 32866
Aug 12 23:47:48.587631 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:48.596046 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:47:48.597161 systemd[1]: sshd@36-49.13.54.157:22-139.178.68.195:32866.service: Deactivated successfully.
Aug 12 23:47:48.604517 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:47:48.607771 systemd-logind[1520]: Removed session 17.
Aug 12 23:47:51.855418 containerd[1548]: time="2025-08-12T23:47:51.855366721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"4114395a65c1c1d2efa8af91ee37933063b88ea4f788d9e92bfae1c7c5c7cae9\" pid:5791 exited_at:{seconds:1755042471 nanos:854073229}"
Aug 12 23:47:53.761567 systemd[1]: Started sshd@38-49.13.54.157:22-139.178.68.195:57900.service - OpenSSH per-connection server daemon (139.178.68.195:57900).
Aug 12 23:47:54.780188 sshd[5802]: Accepted publickey for core from 139.178.68.195 port 57900 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:47:54.782472 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:47:54.787893 systemd-logind[1520]: New session 18 of user core.
Aug 12 23:47:54.793454 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:47:55.551403 sshd[5804]: Connection closed by 139.178.68.195 port 57900
Aug 12 23:47:55.553574 sshd-session[5802]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:55.560313 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:47:55.560528 systemd[1]: sshd@38-49.13.54.157:22-139.178.68.195:57900.service: Deactivated successfully.
Aug 12 23:47:55.562710 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:47:55.565930 systemd-logind[1520]: Removed session 18.
Aug 12 23:47:56.835960 containerd[1548]: time="2025-08-12T23:47:56.835863428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"3a0248b1cfb319c52a934dcd516577e242d4c35e84d6020dcd9444af8c9674c4\" pid:5830 exited_at:{seconds:1755042476 nanos:835456144}"
Aug 12 23:48:00.725192 systemd[1]: Started sshd@39-49.13.54.157:22-139.178.68.195:48402.service - OpenSSH per-connection server daemon (139.178.68.195:48402).
Aug 12 23:48:01.737477 sshd[5839]: Accepted publickey for core from 139.178.68.195 port 48402 ssh2: RSA SHA256:mYYevtwiboZ0WAYUecJRosD7TZEB8o4Se1lqZszh41o
Aug 12 23:48:01.739545 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:48:01.745870 systemd-logind[1520]: New session 19 of user core.
Aug 12 23:48:01.750427 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:48:02.173875 containerd[1548]: time="2025-08-12T23:48:02.173747121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"8b3954dd469b9f0a2e6cb23cc8d2f9a64bca33e9333d76b405da1a2bf393c74a\" pid:5854 exited_at:{seconds:1755042482 nanos:173069316}"
Aug 12 23:48:02.517008 sshd[5841]: Connection closed by 139.178.68.195 port 48402
Aug 12 23:48:02.517654 sshd-session[5839]: pam_unix(sshd:session): session closed for user core
Aug 12 23:48:02.523854 systemd[1]: sshd@39-49.13.54.157:22-139.178.68.195:48402.service: Deactivated successfully.
Aug 12 23:48:02.528130 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:48:02.532242 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:48:02.534414 systemd-logind[1520]: Removed session 19.
Aug 12 23:48:08.648877 containerd[1548]: time="2025-08-12T23:48:08.648833255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923c4bdc9be3cd6f410a53faf35bed1590d143ff04f1f5a8fe52586d1290a693\" id:\"62c6f867fe6f4e0071bdf4f732f15867a5b736f49a0f02fcf999a68bfb59a453\" pid:5888 exited_at:{seconds:1755042488 nanos:648176290}"
Aug 12 23:48:09.751616 containerd[1548]: time="2025-08-12T23:48:09.751454223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48fee397fa82b165d7cb8847fe3251ff1966f81c3bd5d5f886d48fbb6317b10e\" id:\"9d7184e474d5ce0911853401f2684588a8ddaa937fc95a3c01465a702d2e4b4e\" pid:5909 exited_at:{seconds:1755042489 nanos:750921339}"
Aug 12 23:48:12.462591 sshd[5758]: Invalid user support from 204.76.203.28 port 34146
Aug 12 23:48:13.681955 sshd[5758]: Received disconnect from 204.76.203.28 port 34146:11: Bye Bye [preauth]
Aug 12 23:48:13.681955 sshd[5758]: Disconnected from invalid user support 204.76.203.28 port 34146 [preauth]
Aug 12 23:48:13.685777 systemd[1]: sshd@37-49.13.54.157:22-204.76.203.28:34146.service: Deactivated successfully.
Aug 12 23:48:17.520615 systemd[1]: cri-containerd-ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439.scope: Deactivated successfully.
Aug 12 23:48:17.522065 systemd[1]: cri-containerd-ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439.scope: Consumed 19.931s CPU time, 121.9M memory peak, 4.4M read from disk.
Aug 12 23:48:17.527326 containerd[1548]: time="2025-08-12T23:48:17.526817809Z" level=info msg="received exit event container_id:\"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\" id:\"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\" pid:3012 exit_status:1 exited_at:{seconds:1755042497 nanos:526137525}"
Aug 12 23:48:17.527744 containerd[1548]: time="2025-08-12T23:48:17.527358652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\" id:\"ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439\" pid:3012 exit_status:1 exited_at:{seconds:1755042497 nanos:526137525}"
Aug 12 23:48:17.553853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439-rootfs.mount: Deactivated successfully.
Aug 12 23:48:17.759017 kubelet[2678]: E0812 23:48:17.758925 2678 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46788->10.0.0.2:2379: read: connection timed out"
Aug 12 23:48:17.762671 systemd[1]: cri-containerd-66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929.scope: Deactivated successfully.
Aug 12 23:48:17.763508 systemd[1]: cri-containerd-66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929.scope: Consumed 4.115s CPU time, 26.6M memory peak, 3.6M read from disk.
Aug 12 23:48:17.766143 containerd[1548]: time="2025-08-12T23:48:17.765932183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\" id:\"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\" pid:2545 exit_status:1 exited_at:{seconds:1755042497 nanos:765540421}"
Aug 12 23:48:17.766143 containerd[1548]: time="2025-08-12T23:48:17.765952823Z" level=info msg="received exit event container_id:\"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\" id:\"66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929\" pid:2545 exit_status:1 exited_at:{seconds:1755042497 nanos:765540421}"
Aug 12 23:48:17.795675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929-rootfs.mount: Deactivated successfully.
Aug 12 23:48:17.917508 systemd[1]: Started sshd@40-49.13.54.157:22-204.76.203.28:64260.service - OpenSSH per-connection server daemon (204.76.203.28:64260).
Aug 12 23:48:18.271061 sshd[5948]: Invalid user config from 204.76.203.28 port 64260
Aug 12 23:48:18.389650 kubelet[2678]: I0812 23:48:18.389527 2678 scope.go:117] "RemoveContainer" containerID="ba69d0d8f93e7b5917bb9f48800f9245b33857e680955a13cad8459020dae439"
Aug 12 23:48:18.394966 kubelet[2678]: I0812 23:48:18.394850 2678 scope.go:117] "RemoveContainer" containerID="66f3137dd5319fd94bc80cab6dd90e871befd7126420037eda1ab8c96361f929"
Aug 12 23:48:18.395289 containerd[1548]: time="2025-08-12T23:48:18.395235315Z" level=info msg="CreateContainer within sandbox \"09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 12 23:48:18.399539 containerd[1548]: time="2025-08-12T23:48:18.399472258Z" level=info msg="CreateContainer within sandbox \"eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 12 23:48:18.409114 containerd[1548]: time="2025-08-12T23:48:18.408387189Z" level=info msg="Container d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:48:18.419395 containerd[1548]: time="2025-08-12T23:48:18.419162669Z" level=info msg="Container bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:48:18.425238 containerd[1548]: time="2025-08-12T23:48:18.425186403Z" level=info msg="CreateContainer within sandbox \"09ffdd08d54f6b02e7daecae5a730e70684b2b99660aec9e2f0b10caaed9791f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee\""
Aug 12 23:48:18.426226 containerd[1548]: time="2025-08-12T23:48:18.426174049Z" level=info msg="StartContainer for \"d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee\""
Aug 12 23:48:18.427174 containerd[1548]: time="2025-08-12T23:48:18.427139374Z" level=info msg="connecting to shim d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee" address="unix:///run/containerd/s/7e27c316021208d8cb8ea831952dcf825c21a7b24201439e890a4daa97a79bdd" protocol=ttrpc version=3
Aug 12 23:48:18.430890 containerd[1548]: time="2025-08-12T23:48:18.430859155Z" level=info msg="CreateContainer within sandbox \"eb51cb9053c6ea3168ae6bd78967b69494f6f8926ec0b099f26a52385dd4e998\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08\""
Aug 12 23:48:18.431615 containerd[1548]: time="2025-08-12T23:48:18.431578719Z" level=info msg="StartContainer for \"bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08\""
Aug 12 23:48:18.435608 containerd[1548]: time="2025-08-12T23:48:18.435532822Z" level=info msg="connecting to shim bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08" address="unix:///run/containerd/s/9cdc186dae95ea666b5885e5fb549b4e54da042d75b0c6b1d13659a2ddea5613" protocol=ttrpc version=3
Aug 12 23:48:18.457431 systemd[1]: Started cri-containerd-d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee.scope - libcontainer container d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee.
Aug 12 23:48:18.469413 systemd[1]: Started cri-containerd-bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08.scope - libcontainer container bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08.
Aug 12 23:48:18.509263 containerd[1548]: time="2025-08-12T23:48:18.509217797Z" level=info msg="StartContainer for \"d557397084461fa49ac79a8eadf891abe67d9b79e746a6f58f0051f976f8fdee\" returns successfully"
Aug 12 23:48:18.535091 systemd[1]: cri-containerd-c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae.scope: Deactivated successfully.
Aug 12 23:48:18.535405 systemd[1]: cri-containerd-c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae.scope: Consumed 4.528s CPU time, 64M memory peak, 3.8M read from disk.
Aug 12 23:48:18.543297 sshd[5948]: Received disconnect from 204.76.203.28 port 64260:11: Bye Bye [preauth]
Aug 12 23:48:18.543297 sshd[5948]: Disconnected from invalid user config 204.76.203.28 port 64260 [preauth]
Aug 12 23:48:18.544444 containerd[1548]: time="2025-08-12T23:48:18.544286874Z" level=info msg="received exit event container_id:\"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\" id:\"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\" pid:2531 exit_status:1 exited_at:{seconds:1755042498 nanos:541881501}"
Aug 12 23:48:18.545584 containerd[1548]: time="2025-08-12T23:48:18.544683956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\" id:\"c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae\" pid:2531 exit_status:1 exited_at:{seconds:1755042498 nanos:541881501}"
Aug 12 23:48:18.547741 systemd[1]: sshd@40-49.13.54.157:22-204.76.203.28:64260.service: Deactivated successfully.
Aug 12 23:48:18.568493 containerd[1548]: time="2025-08-12T23:48:18.568373050Z" level=info msg="StartContainer for \"bfd9dd716c5966d0799f37d34286fe4e4d8fcbab6daed0ca3fe86dc8eba2cd08\" returns successfully"
Aug 12 23:48:18.599686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae-rootfs.mount: Deactivated successfully.
Aug 12 23:48:19.402455 kubelet[2678]: I0812 23:48:19.402417 2678 scope.go:117] "RemoveContainer" containerID="c0c56fe3f092fea32db8e05a42c15810c03d84d5d4d33d81543c2d69e4b68bae"
Aug 12 23:48:19.408492 containerd[1548]: time="2025-08-12T23:48:19.408445777Z" level=info msg="CreateContainer within sandbox \"8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 12 23:48:19.431611 containerd[1548]: time="2025-08-12T23:48:19.431476304Z" level=info msg="Container a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:48:19.443272 containerd[1548]: time="2025-08-12T23:48:19.443222889Z" level=info msg="CreateContainer within sandbox \"8130b61d5c4b873247e576ccbd0b493376027f575100ce2bbdefd57f26b82e39\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734\""
Aug 12 23:48:19.443828 containerd[1548]: time="2025-08-12T23:48:19.443792892Z" level=info msg="StartContainer for \"a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734\""
Aug 12 23:48:19.446610 containerd[1548]: time="2025-08-12T23:48:19.446570668Z" level=info msg="connecting to shim a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734" address="unix:///run/containerd/s/3bc6a9503806106dd2badf2c4a541d00d97b92b1ef522f8c2949024fa5fa7c46" protocol=ttrpc version=3
Aug 12 23:48:19.474510 systemd[1]: Started cri-containerd-a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734.scope - libcontainer container a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734.
Aug 12 23:48:19.530917 containerd[1548]: time="2025-08-12T23:48:19.530875533Z" level=info msg="StartContainer for \"a9598238dcb30cf397427adbbc4e824d9259e91828a730cf5cd27f5d96992734\" returns successfully"
Aug 12 23:48:20.151459 systemd[1]: Started sshd@41-49.13.54.157:22-204.76.203.28:59366.service - OpenSSH per-connection server daemon (204.76.203.28:59366).
Aug 12 23:48:20.506632 sshd[6077]: Received disconnect from 204.76.203.28 port 59366:11: Bye Bye [preauth]
Aug 12 23:48:20.507721 sshd[6077]: Disconnected from authenticating user sshd 204.76.203.28 port 59366 [preauth]
Aug 12 23:48:20.515593 systemd[1]: sshd@41-49.13.54.157:22-204.76.203.28:59366.service: Deactivated successfully.
Aug 12 23:48:21.850731 containerd[1548]: time="2025-08-12T23:48:21.850662331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cafefbbe0fd7b077d8d829c2560bb39ada3a75d66b83e936e63545c94415daa\" id:\"b14b0ab3e1e3ffdc03e505b81103f124ad8cdfc6b4538caf1151d9661f307526\" pid:6101 exited_at:{seconds:1755042501 nanos:850298769}"
Aug 12 23:48:22.679407 kubelet[2678]: E0812 23:48:22.678953 2678 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46584->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4372-1-0-f-e67fdcf04d.185b29d9192fce12 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4372-1-0-f-e67fdcf04d,UID:5a18637e370445f2fc5007cd65b5be83,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-f-e67fdcf04d,},FirstTimestamp:2025-08-12 23:48:12.198145554 +0000 UTC m=+220.861382224,LastTimestamp:2025-08-12 23:48:12.198145554 +0000 UTC m=+220.861382224,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-f-e67fdcf04d,}"