Sep 5 23:54:05.333890 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:54:05.333912 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:54:05.333920 kernel: KASLR enabled
Sep 5 23:54:05.333926 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 5 23:54:05.333934 kernel: printk: bootconsole [pl11] enabled
Sep 5 23:54:05.333940 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:54:05.333947 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 5 23:54:05.333953 kernel: random: crng init done
Sep 5 23:54:05.333960 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:54:05.333966 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 5 23:54:05.333972 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.333978 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.333986 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 5 23:54:05.333992 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334000 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334006 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334012 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334020 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334027 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334033 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 5 23:54:05.334039 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:54:05.334046 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 5 23:54:05.334052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 5 23:54:05.334059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 5 23:54:05.334065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 5 23:54:05.334072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 5 23:54:05.334078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 5 23:54:05.334084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 5 23:54:05.334093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 5 23:54:05.334099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 5 23:54:05.334106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 5 23:54:05.334112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 5 23:54:05.334118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 5 23:54:05.334125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 5 23:54:05.334131 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 5 23:54:05.334137 kernel: Zone ranges:
Sep 5 23:54:05.334143 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 5 23:54:05.334167 kernel: DMA32 empty
Sep 5 23:54:05.334173 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 5 23:54:05.334179 kernel: Movable zone start for each node
Sep 5 23:54:05.334190 kernel: Early memory node ranges
Sep 5 23:54:05.335227 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 5 23:54:05.335238 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 5 23:54:05.335245 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 5 23:54:05.335252 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 5 23:54:05.335263 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 5 23:54:05.335270 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 5 23:54:05.335277 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 5 23:54:05.335284 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 5 23:54:05.335291 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 5 23:54:05.335298 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:54:05.335305 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:54:05.335312 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:54:05.335319 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 5 23:54:05.335325 kernel: psci: SMC Calling Convention v1.4
Sep 5 23:54:05.335333 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 5 23:54:05.335339 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 5 23:54:05.335355 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:54:05.335362 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:54:05.335369 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:54:05.335376 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:54:05.335383 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:54:05.335390 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:54:05.335400 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:54:05.335407 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:54:05.335414 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:54:05.335421 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:54:05.335428 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 5 23:54:05.335440 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:54:05.335446 kernel: alternatives: applying boot alternatives
Sep 5 23:54:05.335455 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:54:05.335462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:54:05.335469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:54:05.335476 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:54:05.335485 kernel: Fallback order for Node 0: 0
Sep 5 23:54:05.335492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 5 23:54:05.335499 kernel: Policy zone: Normal
Sep 5 23:54:05.335506 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:54:05.335513 kernel: software IO TLB: area num 2.
Sep 5 23:54:05.335523 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 5 23:54:05.335531 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Sep 5 23:54:05.335538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:54:05.335544 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:54:05.335552 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:54:05.335559 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:54:05.335566 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:54:05.335575 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:54:05.335582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:54:05.335589 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:54:05.335596 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:54:05.335604 kernel: GICv3: 960 SPIs implemented
Sep 5 23:54:05.335611 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:54:05.335618 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:54:05.335627 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:54:05.335634 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 5 23:54:05.335641 kernel: ITS: No ITS available, not enabling LPIs
Sep 5 23:54:05.335648 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:54:05.335655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:54:05.335664 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:54:05.335671 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:54:05.335678 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:54:05.335687 kernel: Console: colour dummy device 80x25
Sep 5 23:54:05.335694 kernel: printk: console [tty1] enabled
Sep 5 23:54:05.335701 kernel: ACPI: Core revision 20230628
Sep 5 23:54:05.335711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:54:05.335718 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:54:05.335725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:54:05.335732 kernel: landlock: Up and running.
Sep 5 23:54:05.335739 kernel: SELinux: Initializing.
Sep 5 23:54:05.335748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:54:05.335755 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:54:05.335764 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:54:05.335772 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:54:05.335779 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 5 23:54:05.335786 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 5 23:54:05.335795 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 5 23:54:05.335803 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:54:05.335810 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:54:05.335823 kernel: Remapping and enabling EFI services.
Sep 5 23:54:05.335831 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:54:05.335841 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:54:05.335848 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 5 23:54:05.335857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:54:05.335864 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:54:05.335871 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:54:05.335879 kernel: SMP: Total of 2 processors activated.
Sep 5 23:54:05.335887 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:54:05.335900 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 5 23:54:05.335908 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:54:05.335915 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:54:05.335923 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:54:05.335930 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:54:05.335938 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:54:05.335948 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:54:05.335955 kernel: alternatives: applying system-wide alternatives
Sep 5 23:54:05.335962 kernel: devtmpfs: initialized
Sep 5 23:54:05.335971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:54:05.335979 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:54:05.335989 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:54:05.335996 kernel: SMBIOS 3.1.0 present.
Sep 5 23:54:05.336004 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 5 23:54:05.336011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:54:05.336019 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:54:05.336026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:54:05.336036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:54:05.336045 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:54:05.336052 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 5 23:54:05.336060 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:54:05.336070 kernel: cpuidle: using governor menu
Sep 5 23:54:05.336078 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:54:05.336086 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:54:05.336093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:54:05.336101 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:54:05.336108 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:54:05.336120 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:54:05.336128 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:54:05.336136 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:54:05.336143 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:54:05.336152 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:54:05.336162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:54:05.336170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:54:05.336177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:54:05.336185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:54:05.336194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:54:05.336211 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:54:05.336219 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:54:05.336226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:54:05.336234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:54:05.336241 kernel: ACPI: Interpreter enabled
Sep 5 23:54:05.336253 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:54:05.336260 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:54:05.336268 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:54:05.336277 kernel: printk: bootconsole [pl11] disabled
Sep 5 23:54:05.336285 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 5 23:54:05.336296 kernel: iommu: Default domain type: Translated
Sep 5 23:54:05.336303 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:54:05.336311 kernel: efivars: Registered efivars operations
Sep 5 23:54:05.336318 kernel: vgaarb: loaded
Sep 5 23:54:05.336325 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:54:05.336333 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:54:05.336343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:54:05.336352 kernel: pnp: PnP ACPI init
Sep 5 23:54:05.336360 kernel: pnp: PnP ACPI: found 0 devices
Sep 5 23:54:05.336368 kernel: NET: Registered PF_INET protocol family
Sep 5 23:54:05.336376 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:54:05.336383 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:54:05.336394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:54:05.336402 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:54:05.336410 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:54:05.336417 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:54:05.336427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:54:05.336437 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:54:05.336445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:54:05.336452 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:54:05.336460 kernel: kvm [1]: HYP mode not available
Sep 5 23:54:05.336468 kernel: Initialise system trusted keyrings
Sep 5 23:54:05.336475 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:54:05.336483 kernel: Key type asymmetric registered
Sep 5 23:54:05.336504 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:54:05.336513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:54:05.336521 kernel: io scheduler mq-deadline registered
Sep 5 23:54:05.336528 kernel: io scheduler kyber registered
Sep 5 23:54:05.336536 kernel: io scheduler bfq registered
Sep 5 23:54:05.336543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:54:05.336551 kernel: thunder_xcv, ver 1.0
Sep 5 23:54:05.336558 kernel: thunder_bgx, ver 1.0
Sep 5 23:54:05.336566 kernel: nicpf, ver 1.0
Sep 5 23:54:05.336573 kernel: nicvf, ver 1.0
Sep 5 23:54:05.336735 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:54:05.336814 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:54:04 UTC (1757116444)
Sep 5 23:54:05.336825 kernel: efifb: probing for efifb
Sep 5 23:54:05.336833 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 5 23:54:05.336840 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 5 23:54:05.336848 kernel: efifb: scrolling: redraw
Sep 5 23:54:05.336855 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 5 23:54:05.336862 kernel: Console: switching to colour frame buffer device 128x48
Sep 5 23:54:05.336872 kernel: fb0: EFI VGA frame buffer device
Sep 5 23:54:05.336879 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 5 23:54:05.336886 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:54:05.336894 kernel: No ACPI PMU IRQ for CPU0
Sep 5 23:54:05.336901 kernel: No ACPI PMU IRQ for CPU1
Sep 5 23:54:05.336909 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 5 23:54:05.336916 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:54:05.336924 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:54:05.336931 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:54:05.336940 kernel: Segment Routing with IPv6
Sep 5 23:54:05.336948 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:54:05.336955 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:54:05.336962 kernel: Key type dns_resolver registered
Sep 5 23:54:05.336970 kernel: registered taskstats version 1
Sep 5 23:54:05.336977 kernel: Loading compiled-in X.509 certificates
Sep 5 23:54:05.336984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:54:05.336992 kernel: Key type .fscrypt registered
Sep 5 23:54:05.336999 kernel: Key type fscrypt-provisioning registered
Sep 5 23:54:05.337008 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:54:05.337015 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:54:05.337023 kernel: ima: No architecture policies found
Sep 5 23:54:05.337030 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:54:05.337038 kernel: clk: Disabling unused clocks
Sep 5 23:54:05.337045 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:54:05.337052 kernel: Run /init as init process
Sep 5 23:54:05.337060 kernel: with arguments:
Sep 5 23:54:05.337067 kernel: /init
Sep 5 23:54:05.337076 kernel: with environment:
Sep 5 23:54:05.337083 kernel: HOME=/
Sep 5 23:54:05.337090 kernel: TERM=linux
Sep 5 23:54:05.337098 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:54:05.337107 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:54:05.337117 systemd[1]: Detected virtualization microsoft.
Sep 5 23:54:05.337129 systemd[1]: Detected architecture arm64.
Sep 5 23:54:05.337136 systemd[1]: Running in initrd.
Sep 5 23:54:05.337146 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:54:05.337154 systemd[1]: Hostname set to .
Sep 5 23:54:05.337162 systemd[1]: Initializing machine ID from random generator.
Sep 5 23:54:05.337169 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:54:05.337177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:05.337185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:05.344143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:54:05.344188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:54:05.344213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:54:05.344222 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:54:05.344232 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:54:05.344240 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:54:05.344249 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:05.344256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:05.344266 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:54:05.344275 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:54:05.344283 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:54:05.344291 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:54:05.344298 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:54:05.344306 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:54:05.344314 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:54:05.344322 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:54:05.344330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:05.344341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:05.344349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:05.344357 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:54:05.344365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:54:05.344372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:54:05.344381 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:54:05.344388 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:54:05.344397 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:54:05.344405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:54:05.344445 systemd-journald[217]: Collecting audit messages is disabled.
Sep 5 23:54:05.344468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:05.344477 systemd-journald[217]: Journal started
Sep 5 23:54:05.344498 systemd-journald[217]: Runtime Journal (/run/log/journal/5e8ee778946043f99fa101b95e6d4926) is 8.0M, max 78.5M, 70.5M free.
Sep 5 23:54:05.345184 systemd-modules-load[218]: Inserted module 'overlay'
Sep 5 23:54:05.356253 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:54:05.362830 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:54:05.398040 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:54:05.398065 kernel: Bridge firewalling registered
Sep 5 23:54:05.386706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:05.404454 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 5 23:54:05.405348 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:54:05.416106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:05.427074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:05.455542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:05.470266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:54:05.485364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:54:05.510396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:54:05.517566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:05.534551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:05.546662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:54:05.560553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:05.589470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:54:05.603813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:05.618370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:54:05.650725 dracut-cmdline[249]: dracut-dracut-053
Sep 5 23:54:05.650725 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:54:05.642864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:05.655830 systemd-resolved[253]: Positive Trust Anchors:
Sep 5 23:54:05.655841 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:54:05.655872 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:54:05.658177 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 5 23:54:05.663459 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:05.706010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:05.821210 kernel: SCSI subsystem initialized
Sep 5 23:54:05.827221 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:54:05.838229 kernel: iscsi: registered transport (tcp)
Sep 5 23:54:05.856413 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:54:05.856489 kernel: QLogic iSCSI HBA Driver
Sep 5 23:54:05.891772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:54:05.905445 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:54:05.937222 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:54:05.937270 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:54:05.943835 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:54:05.993228 kernel: raid6: neonx8 gen() 15741 MB/s
Sep 5 23:54:06.013209 kernel: raid6: neonx4 gen() 15678 MB/s
Sep 5 23:54:06.033206 kernel: raid6: neonx2 gen() 13237 MB/s
Sep 5 23:54:06.054212 kernel: raid6: neonx1 gen() 10520 MB/s
Sep 5 23:54:06.074205 kernel: raid6: int64x8 gen() 6978 MB/s
Sep 5 23:54:06.094206 kernel: raid6: int64x4 gen() 7337 MB/s
Sep 5 23:54:06.115207 kernel: raid6: int64x2 gen() 6131 MB/s
Sep 5 23:54:06.138635 kernel: raid6: int64x1 gen() 5059 MB/s
Sep 5 23:54:06.138647 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Sep 5 23:54:06.162951 kernel: raid6: .... xor() 12060 MB/s, rmw enabled
Sep 5 23:54:06.162979 kernel: raid6: using neon recovery algorithm
Sep 5 23:54:06.172208 kernel: xor: measuring software checksum speed
Sep 5 23:54:06.179262 kernel: 8regs : 18730 MB/sec
Sep 5 23:54:06.179273 kernel: 32regs : 19617 MB/sec
Sep 5 23:54:06.184466 kernel: arm64_neon : 26954 MB/sec
Sep 5 23:54:06.188846 kernel: xor: using function: arm64_neon (26954 MB/sec)
Sep 5 23:54:06.240221 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:54:06.250145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:54:06.267338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:06.290143 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Sep 5 23:54:06.295714 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:06.315345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:54:06.327791 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Sep 5 23:54:06.356953 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:54:06.372528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:54:06.419713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:06.440400 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:54:06.481249 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:54:06.494336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:54:06.510153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:06.523510 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:54:06.543219 kernel: hv_vmbus: Vmbus version:5.3
Sep 5 23:54:06.545354 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:54:06.571209 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:54:06.584236 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 5 23:54:06.584272 kernel: hv_vmbus: registering driver hid_hyperv
Sep 5 23:54:06.607361 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 5 23:54:06.607414 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 5 23:54:06.607426 kernel: hv_vmbus: registering driver hv_netvsc
Sep 5 23:54:06.607435 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 5 23:54:06.608106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:54:06.634954 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 5 23:54:06.608403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:06.657636 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 5 23:54:06.648845 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:06.675527 kernel: hv_vmbus: registering driver hv_storvsc
Sep 5 23:54:06.675553 kernel: PTP clock support registered
Sep 5 23:54:06.671285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:54:06.671519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:06.714889 kernel: scsi host0: storvsc_host_t
Sep 5 23:54:06.715082 kernel: scsi host1: storvsc_host_t
Sep 5 23:54:06.715175 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 5 23:54:06.715247 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 5 23:54:06.700328 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:06.740659 kernel: hv_utils: Registering HyperV Utility Driver
Sep 5 23:54:06.740681 kernel: hv_vmbus: registering driver hv_utils
Sep 5 23:54:06.737517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:07.143435 kernel: hv_utils: Heartbeat IC version 3.0
Sep 5 23:54:07.143460 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: VF slot 1 added
Sep 5 23:54:07.143650 kernel: hv_utils: Shutdown IC version 3.2
Sep 5 23:54:07.143661 kernel: hv_utils: TimeSync IC version 4.0
Sep 5 23:54:07.143284 systemd-resolved[253]: Clock change detected. Flushing caches.
Sep 5 23:54:07.165933 kernel: hv_vmbus: registering driver hv_pci
Sep 5 23:54:07.165969 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Sep 5 23:54:07.166121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 23:54:07.175208 kernel: hv_pci 84d57326-e87b-4b8f-93ac-df2dd60d6077: PCI VMBus probing: Using version 0x10004
Sep 5 23:54:07.177848 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Sep 5 23:54:07.193849 kernel: hv_pci 84d57326-e87b-4b8f-93ac-df2dd60d6077: PCI host bridge to bus e87b:00
Sep 5 23:54:07.194043 kernel: pci_bus e87b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 5 23:54:07.194145 kernel: pci_bus e87b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 5 23:54:07.200464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:07.234346 kernel: pci e87b:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 5 23:54:07.234391 kernel: pci e87b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 5 23:54:07.234406 kernel: pci e87b:00:02.0: enabling Extended Tags
Sep 5 23:54:07.239240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:07.278266 kernel: pci e87b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e87b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 5 23:54:07.278459 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 5 23:54:07.278567 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Sep 5 23:54:07.278652 kernel: pci_bus e87b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 5 23:54:07.287708 kernel: sd 1:0:0:0: [sda] Write Protect is off
Sep 5 23:54:07.287937 kernel: pci e87b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 5 23:54:07.295497 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 5 23:54:07.304175 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 5 23:54:07.306579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:07.332162 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:54:07.332204 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Sep 5 23:54:07.371448 kernel: mlx5_core e87b:00:02.0: enabling device (0000 -> 0002)
Sep 5 23:54:07.379852 kernel: mlx5_core e87b:00:02.0: firmware version: 16.30.1284
Sep 5 23:54:07.577472 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: VF registering: eth1
Sep 5 23:54:07.577661 kernel: mlx5_core e87b:00:02.0 eth1: joined to eth0
Sep 5 23:54:07.585902 kernel: mlx5_core e87b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 5 23:54:07.595875 kernel: mlx5_core e87b:00:02.0 enP59515s1: renamed from eth1
Sep 5 23:54:08.034880 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 5 23:54:08.062929 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (495)
Sep 5 23:54:08.077592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 5 23:54:08.096733 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (490)
Sep 5 23:54:08.115336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 5 23:54:08.123391 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 5 23:54:08.149281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 5 23:54:08.166190 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:54:08.193858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:54:08.202860 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:54:09.212007 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:54:09.212928 disk-uuid[598]: The operation has completed successfully.
Sep 5 23:54:09.273066 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:54:09.274878 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:54:09.314975 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:54:09.329510 sh[685]: Success
Sep 5 23:54:09.364299 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:54:09.576307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:54:09.584950 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:54:09.594932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:54:09.623705 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:54:09.623757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:09.630917 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:54:09.636190 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:54:09.640415 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:54:09.973500 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:54:09.979317 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:54:09.999129 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:54:10.024843 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:10.024907 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:10.021068 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:54:10.048487 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:54:10.093149 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:54:10.106243 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:54:10.113008 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:10.121140 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:54:10.139404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:54:10.160589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:54:10.180982 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:54:10.209902 systemd-networkd[869]: lo: Link UP
Sep 5 23:54:10.213290 systemd-networkd[869]: lo: Gained carrier
Sep 5 23:54:10.214930 systemd-networkd[869]: Enumeration completed
Sep 5 23:54:10.219166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:54:10.228293 systemd[1]: Reached target network.target - Network.
Sep 5 23:54:10.239612 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:10.239616 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:54:10.309878 kernel: mlx5_core e87b:00:02.0 enP59515s1: Link up
Sep 5 23:54:10.350509 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: Data path switched to VF: enP59515s1
Sep 5 23:54:10.350758 systemd-networkd[869]: enP59515s1: Link UP
Sep 5 23:54:10.350929 systemd-networkd[869]: eth0: Link UP
Sep 5 23:54:10.351040 systemd-networkd[869]: eth0: Gained carrier
Sep 5 23:54:10.351051 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:10.359028 systemd-networkd[869]: enP59515s1: Gained carrier
Sep 5 23:54:10.388882 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 5 23:54:10.994733 ignition[853]: Ignition 2.19.0
Sep 5 23:54:10.994752 ignition[853]: Stage: fetch-offline
Sep 5 23:54:10.999866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:54:10.994791 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:10.994799 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:10.994919 ignition[853]: parsed url from cmdline: ""
Sep 5 23:54:10.994925 ignition[853]: no config URL provided
Sep 5 23:54:10.994929 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:54:10.994936 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:54:10.994941 ignition[853]: failed to fetch config: resource requires networking
Sep 5 23:54:11.037177 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:54:10.995183 ignition[853]: Ignition finished successfully
Sep 5 23:54:11.069348 ignition[879]: Ignition 2.19.0
Sep 5 23:54:11.069364 ignition[879]: Stage: fetch
Sep 5 23:54:11.069580 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:11.069592 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:11.069697 ignition[879]: parsed url from cmdline: ""
Sep 5 23:54:11.069700 ignition[879]: no config URL provided
Sep 5 23:54:11.069705 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:54:11.069712 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:54:11.069735 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 5 23:54:11.168911 ignition[879]: GET result: OK
Sep 5 23:54:11.168998 ignition[879]: config has been read from IMDS userdata
Sep 5 23:54:11.169059 ignition[879]: parsing config with SHA512: 355fc70f12636779a448e5c7acfbff7157e533cb7b6f8a03e0819b8d6ae0b051caac558233cd05daebfbc468c609abd13e7c228d817363821021d315f0147eb7
Sep 5 23:54:11.172722 unknown[879]: fetched base config from "system"
Sep 5 23:54:11.173116 ignition[879]: fetch: fetch complete
Sep 5 23:54:11.172732 unknown[879]: fetched base config from "system"
Sep 5 23:54:11.173120 ignition[879]: fetch: fetch passed
Sep 5 23:54:11.172737 unknown[879]: fetched user config from "azure"
Sep 5 23:54:11.173159 ignition[879]: Ignition finished successfully
Sep 5 23:54:11.178326 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:54:11.205048 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:54:11.229658 ignition[886]: Ignition 2.19.0
Sep 5 23:54:11.229668 ignition[886]: Stage: kargs
Sep 5 23:54:11.234543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:54:11.229887 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:11.229897 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:11.253041 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:54:11.230767 ignition[886]: kargs: kargs passed
Sep 5 23:54:11.230819 ignition[886]: Ignition finished successfully
Sep 5 23:54:11.272987 ignition[893]: Ignition 2.19.0
Sep 5 23:54:11.273002 ignition[893]: Stage: disks
Sep 5 23:54:11.280582 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:54:11.273230 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:11.289036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:54:11.273239 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:11.297999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:54:11.274794 ignition[893]: disks: disks passed
Sep 5 23:54:11.310070 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:54:11.275075 ignition[893]: Ignition finished successfully
Sep 5 23:54:11.320390 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:54:11.332659 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:54:11.360178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:54:11.426837 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 5 23:54:11.437876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:54:11.455067 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:54:11.508863 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:54:11.509816 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:54:11.514736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:54:11.566961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:11.575975 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:54:11.605279 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (912)
Sep 5 23:54:11.605312 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:11.592083 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:54:11.629044 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:11.629069 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:54:11.621436 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:54:11.621480 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:54:11.663334 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:54:11.662652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:54:11.674690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:11.691094 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:54:12.337012 coreos-metadata[914]: Sep 05 23:54:12.336 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 5 23:54:12.347301 coreos-metadata[914]: Sep 05 23:54:12.347 INFO Fetch successful
Sep 5 23:54:12.347301 coreos-metadata[914]: Sep 05 23:54:12.347 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 5 23:54:12.363797 coreos-metadata[914]: Sep 05 23:54:12.359 INFO Fetch successful
Sep 5 23:54:12.375924 coreos-metadata[914]: Sep 05 23:54:12.375 INFO wrote hostname ci-4081.3.5-n-8e502b48f1 to /sysroot/etc/hostname
Sep 5 23:54:12.385463 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:54:12.404985 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 5 23:54:12.496746 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:54:12.525291 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:54:12.554599 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:54:12.563602 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:54:13.689869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:54:13.708085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:54:13.733117 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:13.728250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:54:13.736520 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:54:13.760884 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:54:13.774237 ignition[1035]: INFO : Ignition 2.19.0
Sep 5 23:54:13.774237 ignition[1035]: INFO : Stage: mount
Sep 5 23:54:13.783051 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:13.783051 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:13.783051 ignition[1035]: INFO : mount: mount passed
Sep 5 23:54:13.783051 ignition[1035]: INFO : Ignition finished successfully
Sep 5 23:54:13.779678 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:54:13.806079 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:54:13.824122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:13.851040 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1045)
Sep 5 23:54:13.851078 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:13.863526 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:13.867880 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:54:13.874858 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:54:13.876371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:13.902293 ignition[1063]: INFO : Ignition 2.19.0
Sep 5 23:54:13.902293 ignition[1063]: INFO : Stage: files
Sep 5 23:54:13.911150 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:13.911150 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:13.911150 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:54:13.911150 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:54:13.911150 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:54:13.988263 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:54:13.995751 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:54:13.995751 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:54:13.988629 unknown[1063]: wrote ssh authorized keys file for user: core
Sep 5 23:54:14.020680 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:54:14.031536 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 5 23:54:14.075486 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:54:14.428821 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:54:14.428821 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
"/sysroot/home/core/nginx.yaml" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 5 23:54:14.935928 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 23:54:15.211613 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 5 23:54:15.211613 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 23:54:15.245994 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:54:15.264947 ignition[1063]: INFO : files: files passed Sep 5 23:54:15.264947 ignition[1063]: INFO : Ignition finished successfully Sep 5 23:54:15.258678 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:54:15.293170 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 5 23:54:15.309035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:54:15.334790 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:54:15.392917 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:54:15.392917 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:54:15.334907 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:54:15.426424 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:54:15.345844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:54:15.355793 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:54:15.386081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:54:15.428584 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:54:15.428700 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:54:15.443239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:54:15.456889 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:54:15.468130 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:54:15.494154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:54:15.532455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:54:15.548140 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:54:15.566322 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:54:15.573073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:54:15.585759 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:54:15.596977 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:54:15.597106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:54:15.614411 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:54:15.621008 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:54:15.633371 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:54:15.645022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:54:15.656179 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:54:15.668593 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:54:15.680880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:54:15.693614 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:54:15.704730 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:54:15.717261 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:54:15.727707 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:54:15.727854 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:54:15.743695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 5 23:54:15.750333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:54:15.762963 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:54:15.766858 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:54:15.776439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:54:15.776560 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:54:15.795281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:54:15.795410 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:54:15.802946 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:54:15.803045 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:54:15.883917 ignition[1115]: INFO : Ignition 2.19.0 Sep 5 23:54:15.883917 ignition[1115]: INFO : Stage: umount Sep 5 23:54:15.883917 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:54:15.883917 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:54:15.883917 ignition[1115]: INFO : umount: umount passed Sep 5 23:54:15.883917 ignition[1115]: INFO : Ignition finished successfully Sep 5 23:54:15.814043 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 5 23:54:15.814146 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:54:15.849161 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:54:15.867731 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:54:15.867918 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:54:15.879039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:54:15.889790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:54:15.891739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:54:15.909974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:54:15.910097 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:54:15.931184 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:54:15.931825 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:54:15.931950 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:54:15.952030 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:54:15.952125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:54:15.964867 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:54:15.965124 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:54:15.981382 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:54:15.981446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:54:15.992729 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:54:15.992775 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:54:16.004616 systemd[1]: Stopped target network.target - Network. Sep 5 23:54:16.021142 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:54:16.021207 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:54:16.034598 systemd[1]: Stopped target paths.target - Path Units. 
Sep 5 23:54:16.045846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:54:16.050898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:54:16.060093 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:54:16.071636 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:54:16.082032 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:54:16.082090 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:54:16.093355 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:54:16.093399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:54:16.104841 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:54:16.104898 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:54:16.117232 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:54:16.359549 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: Data path switched from VF: enP59515s1 Sep 5 23:54:16.117282 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:54:16.129387 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:54:16.129430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:54:16.142158 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:54:16.154084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:54:16.165007 systemd-networkd[869]: eth0: DHCPv6 lease lost Sep 5 23:54:16.165642 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:54:16.165723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:54:16.178433 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:54:16.178570 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:54:16.191381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:54:16.191447 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:54:16.222372 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:54:16.232090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:54:16.232184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:54:16.245251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:54:16.261200 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:54:16.262061 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:54:16.288849 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:54:16.288950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:54:16.298418 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:54:16.298485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:54:16.309664 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:54:16.309728 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:54:16.322917 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:54:16.323080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 5 23:54:16.335426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:54:16.335520 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:54:16.354434 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:54:16.354478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:54:16.365095 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:54:16.365150 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:54:16.382059 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:54:16.382111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:54:16.398889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:54:16.398957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:54:16.432098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:54:16.446998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:54:16.447083 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:54:16.675149 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 5 23:54:16.461220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:54:16.461282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:54:16.473993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:54:16.474097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:54:16.506012 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:54:16.506170 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:54:16.519200 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:54:16.549999 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:54:16.578404 systemd[1]: Switching root. 
Sep 5 23:54:16.726101 systemd-journald[217]: Journal stopped
Total pages: 1032156 Sep 5 23:54:05.335499 kernel: Policy zone: Normal Sep 5 23:54:05.335506 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 23:54:05.335513 kernel: software IO TLB: area num 2. Sep 5 23:54:05.335523 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 5 23:54:05.335531 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Sep 5 23:54:05.335538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 5 23:54:05.335544 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 23:54:05.335552 kernel: rcu: RCU event tracing is enabled. Sep 5 23:54:05.335559 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 5 23:54:05.335566 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 23:54:05.335575 kernel: Tracing variant of Tasks RCU enabled. Sep 5 23:54:05.335582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 23:54:05.335589 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:54:05.335596 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:54:05.335604 kernel: GICv3: 960 SPIs implemented Sep 5 23:54:05.335611 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:54:05.335618 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:54:05.335627 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 23:54:05.335634 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 5 23:54:05.335641 kernel: ITS: No ITS available, not enabling LPIs Sep 5 23:54:05.335648 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:54:05.335655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:54:05.335664 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 23:54:05.335671 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 23:54:05.335678 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 23:54:05.335687 kernel: Console: colour dummy device 80x25 Sep 5 23:54:05.335694 kernel: printk: console [tty1] enabled Sep 5 23:54:05.335701 kernel: ACPI: Core revision 20230628 Sep 5 23:54:05.335711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 5 23:54:05.335718 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:54:05.335725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:54:05.335732 kernel: landlock: Up and running. Sep 5 23:54:05.335739 kernel: SELinux: Initializing. Sep 5 23:54:05.335748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:54:05.335755 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:54:05.335764 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:54:05.335772 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:54:05.335779 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 5 23:54:05.335786 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 5 23:54:05.335795 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 5 23:54:05.335803 kernel: rcu: Hierarchical SRCU implementation. 
Sep 5 23:54:05.335810 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:54:05.335823 kernel: Remapping and enabling EFI services. Sep 5 23:54:05.335831 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:54:05.335841 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:54:05.335848 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 5 23:54:05.335857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:54:05.335864 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 23:54:05.335871 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:54:05.335879 kernel: SMP: Total of 2 processors activated. Sep 5 23:54:05.335887 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:54:05.335900 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 5 23:54:05.335908 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 23:54:05.335915 kernel: CPU features: detected: CRC32 instructions Sep 5 23:54:05.335923 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 23:54:05.335930 kernel: CPU features: detected: LSE atomic instructions Sep 5 23:54:05.335938 kernel: CPU features: detected: Privileged Access Never Sep 5 23:54:05.335948 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:54:05.335955 kernel: alternatives: applying system-wide alternatives Sep 5 23:54:05.335962 kernel: devtmpfs: initialized Sep 5 23:54:05.335971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:54:05.335979 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:54:05.335989 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:54:05.335996 kernel: SMBIOS 3.1.0 present. Sep 5 23:54:05.336004 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 5 23:54:05.336011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:54:05.336019 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:54:05.336026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:54:05.336036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:54:05.336045 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:54:05.336052 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 5 23:54:05.336060 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:54:05.336070 kernel: cpuidle: using governor menu Sep 5 23:54:05.336078 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 23:54:05.336086 kernel: ASID allocator initialised with 32768 entries Sep 5 23:54:05.336093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:54:05.336101 kernel: Serial: AMBA PL011 UART driver Sep 5 23:54:05.336108 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 23:54:05.336120 kernel: Modules: 0 pages in range for non-PLT usage Sep 5 23:54:05.336128 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:54:05.336136 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:54:05.336143 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:54:05.336152 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:54:05.336162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:54:05.336170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:54:05.336177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:54:05.336185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:54:05.336194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:54:05.336211 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:54:05.336219 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:54:05.336226 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:54:05.336234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:54:05.336241 kernel: ACPI: Interpreter enabled Sep 5 23:54:05.336253 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:54:05.336260 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 5 23:54:05.336268 kernel: printk: console [ttyAMA0] enabled Sep 5 23:54:05.336277 kernel: printk: bootconsole [pl11] disabled Sep 5 23:54:05.336285 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 5 23:54:05.336296 kernel: iommu: Default domain type: Translated Sep 5 23:54:05.336303 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:54:05.336311 kernel: efivars: Registered efivars operations Sep 5 23:54:05.336318 kernel: vgaarb: loaded Sep 5 23:54:05.336325 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:54:05.336333 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:54:05.336343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:54:05.336352 kernel: pnp: PnP ACPI init Sep 5 23:54:05.336360 kernel: pnp: PnP ACPI: found 0 devices Sep 5 23:54:05.336368 kernel: NET: Registered PF_INET protocol family Sep 5 23:54:05.336376 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:54:05.336383 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:54:05.336394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:54:05.336402 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:54:05.336410 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:54:05.336417 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:54:05.336427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:54:05.336437 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:54:05.336445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:54:05.336452 kernel: PCI: CLS 0 bytes, default 64 
Sep 5 23:54:05.336460 kernel: kvm [1]: HYP mode not available Sep 5 23:54:05.336468 kernel: Initialise system trusted keyrings Sep 5 23:54:05.336475 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:54:05.336483 kernel: Key type asymmetric registered Sep 5 23:54:05.336504 kernel: Asymmetric key parser 'x509' registered Sep 5 23:54:05.336513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:54:05.336521 kernel: io scheduler mq-deadline registered Sep 5 23:54:05.336528 kernel: io scheduler kyber registered Sep 5 23:54:05.336536 kernel: io scheduler bfq registered Sep 5 23:54:05.336543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:54:05.336551 kernel: thunder_xcv, ver 1.0 Sep 5 23:54:05.336558 kernel: thunder_bgx, ver 1.0 Sep 5 23:54:05.336566 kernel: nicpf, ver 1.0 Sep 5 23:54:05.336573 kernel: nicvf, ver 1.0 Sep 5 23:54:05.336735 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:54:05.336814 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:54:04 UTC (1757116444) Sep 5 23:54:05.336825 kernel: efifb: probing for efifb Sep 5 23:54:05.336833 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 5 23:54:05.336840 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 5 23:54:05.336848 kernel: efifb: scrolling: redraw Sep 5 23:54:05.336855 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 5 23:54:05.336862 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 23:54:05.336872 kernel: fb0: EFI VGA frame buffer device Sep 5 23:54:05.336879 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 5 23:54:05.336886 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:54:05.336894 kernel: No ACPI PMU IRQ for CPU0 Sep 5 23:54:05.336901 kernel: No ACPI PMU IRQ for CPU1 Sep 5 23:54:05.336909 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 5 23:54:05.336916 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:54:05.336924 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:54:05.336931 kernel: NET: Registered PF_INET6 protocol family Sep 5 23:54:05.336940 kernel: Segment Routing with IPv6 Sep 5 23:54:05.336948 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:54:05.336955 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:54:05.336962 kernel: Key type dns_resolver registered Sep 5 23:54:05.336970 kernel: registered taskstats version 1 Sep 5 23:54:05.336977 kernel: Loading compiled-in X.509 certificates Sep 5 23:54:05.336984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:54:05.336992 kernel: Key type .fscrypt registered Sep 5 23:54:05.336999 kernel: Key type fscrypt-provisioning registered Sep 5 23:54:05.337008 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 5 23:54:05.337015 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:54:05.337023 kernel: ima: No architecture policies found Sep 5 23:54:05.337030 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:54:05.337038 kernel: clk: Disabling unused clocks Sep 5 23:54:05.337045 kernel: Freeing unused kernel memory: 39424K Sep 5 23:54:05.337052 kernel: Run /init as init process Sep 5 23:54:05.337060 kernel: with arguments: Sep 5 23:54:05.337067 kernel: /init Sep 5 23:54:05.337076 kernel: with environment: Sep 5 23:54:05.337083 kernel: HOME=/ Sep 5 23:54:05.337090 kernel: TERM=linux Sep 5 23:54:05.337098 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:54:05.337107 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:54:05.337117 systemd[1]: Detected virtualization microsoft. Sep 5 23:54:05.337129 systemd[1]: Detected architecture arm64. Sep 5 23:54:05.337136 systemd[1]: Running in initrd. Sep 5 23:54:05.337146 systemd[1]: No hostname configured, using default hostname. Sep 5 23:54:05.337154 systemd[1]: Hostname set to . Sep 5 23:54:05.337162 systemd[1]: Initializing machine ID from random generator. Sep 5 23:54:05.337169 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:54:05.337177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:54:05.337185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:54:05.344143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:54:05.344188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:54:05.344213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:54:05.344222 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:54:05.344232 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:54:05.344240 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:54:05.344249 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:54:05.344256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:54:05.344266 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:54:05.344275 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:54:05.344283 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:54:05.344291 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:54:05.344298 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:54:05.344306 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:54:05.344314 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:54:05.344322 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:54:05.344330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 5 23:54:05.344341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:54:05.344349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:54:05.344357 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:54:05.344365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:54:05.344372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:54:05.344381 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:54:05.344388 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:54:05.344397 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:54:05.344405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:54:05.344445 systemd-journald[217]: Collecting audit messages is disabled. Sep 5 23:54:05.344468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:54:05.344477 systemd-journald[217]: Journal started Sep 5 23:54:05.344498 systemd-journald[217]: Runtime Journal (/run/log/journal/5e8ee778946043f99fa101b95e6d4926) is 8.0M, max 78.5M, 70.5M free. Sep 5 23:54:05.345184 systemd-modules-load[218]: Inserted module 'overlay' Sep 5 23:54:05.356253 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:54:05.362830 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:54:05.398040 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:54:05.398065 kernel: Bridge firewalling registered Sep 5 23:54:05.386706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:54:05.404454 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 5 23:54:05.405348 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 23:54:05.416106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:54:05.427074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:54:05.455542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:54:05.470266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:54:05.485364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:54:05.510396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:54:05.517566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:54:05.534551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:54:05.546662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:54:05.560553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:54:05.589470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:54:05.603813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:54:05.618370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 5 23:54:05.650725 dracut-cmdline[249]: dracut-dracut-053 Sep 5 23:54:05.650725 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:54:05.642864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:54:05.655830 systemd-resolved[253]: Positive Trust Anchors: Sep 5 23:54:05.655841 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:54:05.655872 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:54:05.658177 systemd-resolved[253]: Defaulting to hostname 'linux'. Sep 5 23:54:05.663459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:54:05.706010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:54:05.821210 kernel: SCSI subsystem initialized Sep 5 23:54:05.827221 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:54:05.838229 kernel: iscsi: registered transport (tcp) Sep 5 23:54:05.856413 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:54:05.856489 kernel: QLogic iSCSI HBA Driver Sep 5 23:54:05.891772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:54:05.905445 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:54:05.937222 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 23:54:05.937270 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:54:05.943835 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:54:05.993228 kernel: raid6: neonx8 gen() 15741 MB/s Sep 5 23:54:06.013209 kernel: raid6: neonx4 gen() 15678 MB/s Sep 5 23:54:06.033206 kernel: raid6: neonx2 gen() 13237 MB/s Sep 5 23:54:06.054212 kernel: raid6: neonx1 gen() 10520 MB/s Sep 5 23:54:06.074205 kernel: raid6: int64x8 gen() 6978 MB/s Sep 5 23:54:06.094206 kernel: raid6: int64x4 gen() 7337 MB/s Sep 5 23:54:06.115207 kernel: raid6: int64x2 gen() 6131 MB/s Sep 5 23:54:06.138635 kernel: raid6: int64x1 gen() 5059 MB/s Sep 5 23:54:06.138647 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s Sep 5 23:54:06.162951 kernel: raid6: .... 
xor() 12060 MB/s, rmw enabled Sep 5 23:54:06.162979 kernel: raid6: using neon recovery algorithm Sep 5 23:54:06.172208 kernel: xor: measuring software checksum speed Sep 5 23:54:06.179262 kernel: 8regs : 18730 MB/sec Sep 5 23:54:06.179273 kernel: 32regs : 19617 MB/sec Sep 5 23:54:06.184466 kernel: arm64_neon : 26954 MB/sec Sep 5 23:54:06.188846 kernel: xor: using function: arm64_neon (26954 MB/sec) Sep 5 23:54:06.240221 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:54:06.250145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:54:06.267338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:54:06.290143 systemd-udevd[436]: Using default interface naming scheme 'v255'. Sep 5 23:54:06.295714 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:54:06.315345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:54:06.327791 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Sep 5 23:54:06.356953 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:54:06.372528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:54:06.419713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:54:06.440400 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:54:06.481249 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:54:06.494336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:54:06.510153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:54:06.523510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:54:06.543219 kernel: hv_vmbus: Vmbus version:5.3 Sep 5 23:54:06.545354 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:54:06.571209 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:54:06.584236 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 5 23:54:06.584272 kernel: hv_vmbus: registering driver hid_hyperv Sep 5 23:54:06.607361 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 5 23:54:06.607414 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 5 23:54:06.607426 kernel: hv_vmbus: registering driver hv_netvsc Sep 5 23:54:06.607435 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 5 23:54:06.608106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:54:06.634954 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 23:54:06.608403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:54:06.657636 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 23:54:06.648845 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:54:06.675527 kernel: hv_vmbus: registering driver hv_storvsc Sep 5 23:54:06.675553 kernel: PTP clock support registered Sep 5 23:54:06.671285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 5 23:54:06.671519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:54:06.714889 kernel: scsi host0: storvsc_host_t Sep 5 23:54:06.715082 kernel: scsi host1: storvsc_host_t Sep 5 23:54:06.715175 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 5 23:54:06.715247 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 5 23:54:06.700328 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:54:06.740659 kernel: hv_utils: Registering HyperV Utility Driver Sep 5 23:54:06.740681 kernel: hv_vmbus: registering driver hv_utils Sep 5 23:54:06.737517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:54:07.143435 kernel: hv_utils: Heartbeat IC version 3.0 Sep 5 23:54:07.143460 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: VF slot 1 added Sep 5 23:54:07.143650 kernel: hv_utils: Shutdown IC version 3.2 Sep 5 23:54:07.143661 kernel: hv_utils: TimeSync IC version 4.0 Sep 5 23:54:07.143284 systemd-resolved[253]: Clock change detected. Flushing caches. Sep 5 23:54:07.165933 kernel: hv_vmbus: registering driver hv_pci Sep 5 23:54:07.165969 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Sep 5 23:54:07.166121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:54:07.175208 kernel: hv_pci 84d57326-e87b-4b8f-93ac-df2dd60d6077: PCI VMBus probing: Using version 0x10004 Sep 5 23:54:07.177848 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Sep 5 23:54:07.193849 kernel: hv_pci 84d57326-e87b-4b8f-93ac-df2dd60d6077: PCI host bridge to bus e87b:00 Sep 5 23:54:07.194043 kernel: pci_bus e87b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 5 23:54:07.194145 kernel: pci_bus e87b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 5 23:54:07.200464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:54:07.234346 kernel: pci e87b:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 5 23:54:07.234391 kernel: pci e87b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 5 23:54:07.234406 kernel: pci e87b:00:02.0: enabling Extended Tags Sep 5 23:54:07.239240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:54:07.278266 kernel: pci e87b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e87b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 5 23:54:07.278459 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 5 23:54:07.278567 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 5 23:54:07.278652 kernel: pci_bus e87b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 5 23:54:07.287708 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 5 23:54:07.287937 kernel: pci e87b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 5 23:54:07.295497 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 5 23:54:07.304175 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 5 23:54:07.306579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:54:07.332162 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:54:07.332204 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 5 23:54:07.371448 kernel: mlx5_core e87b:00:02.0: enabling device (0000 -> 0002) Sep 5 23:54:07.379852 kernel: mlx5_core e87b:00:02.0: firmware version: 16.30.1284 Sep 5 23:54:07.577472 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: VF registering: eth1 Sep 5 23:54:07.577661 kernel: mlx5_core e87b:00:02.0 eth1: joined to eth0 Sep 5 23:54:07.585902 kernel: mlx5_core e87b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 5 23:54:07.595875 kernel: mlx5_core e87b:00:02.0 enP59515s1: renamed from eth1 Sep 5 23:54:08.034880 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 5 23:54:08.062929 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (495) Sep 5 23:54:08.077592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 5 23:54:08.096733 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (490) Sep 5 23:54:08.115336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 5 23:54:08.123391 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 5 23:54:08.149281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 5 23:54:08.166190 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:54:08.193858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:54:08.202860 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:54:09.212007 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:54:09.212928 disk-uuid[598]: The operation has completed successfully. Sep 5 23:54:09.273066 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:54:09.274878 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:54:09.314975 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:54:09.329510 sh[685]: Success Sep 5 23:54:09.364299 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:54:09.576307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 23:54:09.584950 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:54:09.594932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:54:09.623705 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:54:09.623757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:54:09.630917 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:54:09.636190 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:54:09.640415 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:54:09.973500 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:54:09.979317 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 23:54:09.999129 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 5 23:54:10.024843 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:54:10.024907 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:54:10.021068 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 23:54:10.048487 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:54:10.093149 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:54:10.106243 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:54:10.113008 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:54:10.121140 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:54:10.139404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:54:10.160589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:54:10.180982 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:54:10.209902 systemd-networkd[869]: lo: Link UP Sep 5 23:54:10.213290 systemd-networkd[869]: lo: Gained carrier Sep 5 23:54:10.214930 systemd-networkd[869]: Enumeration completed Sep 5 23:54:10.219166 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:54:10.228293 systemd[1]: Reached target network.target - Network. Sep 5 23:54:10.239612 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:54:10.239616 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:54:10.309878 kernel: mlx5_core e87b:00:02.0 enP59515s1: Link up Sep 5 23:54:10.350509 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: Data path switched to VF: enP59515s1 Sep 5 23:54:10.350758 systemd-networkd[869]: enP59515s1: Link UP Sep 5 23:54:10.350929 systemd-networkd[869]: eth0: Link UP Sep 5 23:54:10.351040 systemd-networkd[869]: eth0: Gained carrier Sep 5 23:54:10.351051 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:54:10.359028 systemd-networkd[869]: enP59515s1: Gained carrier Sep 5 23:54:10.388882 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 5 23:54:10.994733 ignition[853]: Ignition 2.19.0 Sep 5 23:54:10.994752 ignition[853]: Stage: fetch-offline Sep 5 23:54:10.999866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:54:10.994791 ignition[853]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:54:10.994799 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:54:10.994919 ignition[853]: parsed url from cmdline: "" Sep 5 23:54:10.994925 ignition[853]: no config URL provided Sep 5 23:54:10.994929 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:54:10.994936 ignition[853]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:54:10.994941 ignition[853]: failed to fetch config: resource requires networking Sep 5 23:54:11.037177 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 5 23:54:10.995183 ignition[853]: Ignition finished successfully Sep 5 23:54:11.069348 ignition[879]: Ignition 2.19.0 Sep 5 23:54:11.069364 ignition[879]: Stage: fetch Sep 5 23:54:11.069580 ignition[879]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:54:11.069592 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:54:11.069697 ignition[879]: parsed url from cmdline: "" Sep 5 23:54:11.069700 ignition[879]: no config URL provided Sep 5 23:54:11.069705 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:54:11.069712 ignition[879]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:54:11.069735 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 5 23:54:11.168911 ignition[879]: GET result: OK Sep 5 23:54:11.168998 ignition[879]: config has been read from IMDS userdata Sep 5 23:54:11.169059 ignition[879]: parsing config with SHA512: 355fc70f12636779a448e5c7acfbff7157e533cb7b6f8a03e0819b8d6ae0b051caac558233cd05daebfbc468c609abd13e7c228d817363821021d315f0147eb7 Sep 5 23:54:11.172722 unknown[879]: fetched base config from "system" Sep 5 23:54:11.173116 ignition[879]: fetch: fetch complete Sep 5 23:54:11.172732 unknown[879]: fetched base config from "system" Sep 5 23:54:11.173120 ignition[879]: fetch: fetch passed Sep 5 23:54:11.172737 unknown[879]: fetched user config from "azure" Sep 5 23:54:11.173159 ignition[879]: Ignition finished successfully Sep 5 23:54:11.178326 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 5 23:54:11.205048 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 23:54:11.229658 ignition[886]: Ignition 2.19.0 Sep 5 23:54:11.229668 ignition[886]: Stage: kargs Sep 5 23:54:11.234543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 23:54:11.229887 ignition[886]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:54:11.229897 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:54:11.253041 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 23:54:11.230767 ignition[886]: kargs: kargs passed Sep 5 23:54:11.230819 ignition[886]: Ignition finished successfully Sep 5 23:54:11.272987 ignition[893]: Ignition 2.19.0 Sep 5 23:54:11.273002 ignition[893]: Stage: disks Sep 5 23:54:11.280582 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 23:54:11.273230 ignition[893]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:54:11.289036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 23:54:11.273239 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:54:11.297999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:54:11.274794 ignition[893]: disks: disks passed Sep 5 23:54:11.310070 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:54:11.275075 ignition[893]: Ignition finished successfully Sep 5 23:54:11.320390 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:54:11.332659 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:54:11.360178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 23:54:11.426837 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 5 23:54:11.437876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Sep 5 23:54:11.455067 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:54:11.508863 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:54:11.509816 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:54:11.514736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:54:11.566961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:11.575975 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:54:11.605279 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (912)
Sep 5 23:54:11.605312 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:11.592083 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:54:11.629044 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:11.629069 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:54:11.621436 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:54:11.621480 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:54:11.663334 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:54:11.662652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:54:11.674690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:11.691094 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:54:12.337012 coreos-metadata[914]: Sep 05 23:54:12.336 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 5 23:54:12.347301 coreos-metadata[914]: Sep 05 23:54:12.347 INFO Fetch successful
Sep 5 23:54:12.347301 coreos-metadata[914]: Sep 05 23:54:12.347 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 5 23:54:12.363797 coreos-metadata[914]: Sep 05 23:54:12.359 INFO Fetch successful
Sep 5 23:54:12.375924 coreos-metadata[914]: Sep 05 23:54:12.375 INFO wrote hostname ci-4081.3.5-n-8e502b48f1 to /sysroot/etc/hostname
Sep 5 23:54:12.385463 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:54:12.404985 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 5 23:54:12.496746 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:54:12.525291 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:54:12.554599 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:54:12.563602 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:54:13.689869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:54:13.708085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:54:13.733117 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:13.728250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:54:13.736520 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:54:13.760884 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
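flatcar-metadata-hostname.service above resolves the VM's name from the IMDS compute/name endpoint shown in the log and writes it into the new root before the pivot. Roughly, and only as an assumption-laden sketch rather than the agent's real code:

```python
# Sketch of the hostname step: fetch the instance name the way the log
# shows and write it to the path coreos-metadata reports.
import urllib.request

IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/"
             "name?api-version=2017-08-01&format=text")

req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()   # e.g. "ci-4081.3.5-n-8e502b48f1"

with open("/sysroot/etc/hostname", "w") as f:   # initramfs path from the log
    f.write(name + "\n")
```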
Sep 5 23:54:13.774237 ignition[1035]: INFO : Ignition 2.19.0
Sep 5 23:54:13.774237 ignition[1035]: INFO : Stage: mount
Sep 5 23:54:13.783051 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:13.783051 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:13.783051 ignition[1035]: INFO : mount: mount passed
Sep 5 23:54:13.783051 ignition[1035]: INFO : Ignition finished successfully
Sep 5 23:54:13.779678 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:54:13.806079 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:54:13.824122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:13.851040 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1045)
Sep 5 23:54:13.851078 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:13.863526 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:13.867880 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:54:13.874858 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:54:13.876371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:13.902293 ignition[1063]: INFO : Ignition 2.19.0
Sep 5 23:54:13.902293 ignition[1063]: INFO : Stage: files
Sep 5 23:54:13.911150 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:13.911150 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:13.911150 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:54:13.911150 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:54:13.911150 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:54:13.988263 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:54:13.995751 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:54:13.995751 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:54:13.988629 unknown[1063]: wrote ssh authorized keys file for user: core
Sep 5 23:54:14.020680 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:54:14.031536 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 5 23:54:14.075486 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:54:14.428821 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 5 23:54:14.428821 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:54:14.449687 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 5 23:54:14.935928 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 23:54:15.211613 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 5 23:54:15.211613 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 23:54:15.245994 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:54:15.264947 ignition[1063]: INFO : files: files passed
Sep 5 23:54:15.264947 ignition[1063]: INFO : Ignition finished successfully
Sep 5 23:54:15.258678 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:54:15.293170 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:54:15.309035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:54:15.334790 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:54:15.392917 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:15.392917 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:15.334907 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:54:15.426424 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:15.345844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:54:15.355793 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:54:15.386081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:54:15.428584 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:54:15.428700 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:54:15.443239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:54:15.456889 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:54:15.468130 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:54:15.494154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:54:15.532455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:54:15.548140 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:54:15.566322 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:15.573073 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:15.585759 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 23:54:15.596977 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:54:15.597106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:54:15.614411 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 23:54:15.621008 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 23:54:15.633371 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 23:54:15.645022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:54:15.656179 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 23:54:15.668593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 23:54:15.680880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:54:15.693614 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 23:54:15.704730 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 23:54:15.717261 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 23:54:15.727707 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:54:15.727854 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:54:15.743695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:15.750333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:15.762963 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 23:54:15.766858 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:15.776439 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:54:15.776560 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:54:15.795281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:54:15.795410 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:54:15.802946 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:54:15.803045 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 23:54:15.883917 ignition[1115]: INFO : Ignition 2.19.0
Sep 5 23:54:15.883917 ignition[1115]: INFO : Stage: umount
Sep 5 23:54:15.883917 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:15.883917 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:54:15.883917 ignition[1115]: INFO : umount: umount passed
Sep 5 23:54:15.883917 ignition[1115]: INFO : Ignition finished successfully
Sep 5 23:54:15.814043 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 5 23:54:15.814146 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:54:15.849161 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 23:54:15.867731 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 23:54:15.867918 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:15.879039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 23:54:15.889790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:54:15.891739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:15.909974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:54:15.910097 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:54:15.931184 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:54:15.931825 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:54:15.931950 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 23:54:15.952030 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 23:54:15.952125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 23:54:15.964867 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:54:15.965124 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 23:54:15.981382 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:54:15.981446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 23:54:15.992729 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 23:54:15.992775 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 23:54:16.004616 systemd[1]: Stopped target network.target - Network.
Sep 5 23:54:16.021142 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:54:16.021207 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:54:16.034598 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 23:54:16.045846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:54:16.050898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:16.060093 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 23:54:16.071636 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 23:54:16.082032 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:54:16.082090 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:54:16.093355 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:54:16.093399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:54:16.104841 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:54:16.104898 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 23:54:16.117232 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 23:54:16.359549 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: Data path switched from VF: enP59515s1
Sep 5 23:54:16.117282 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 23:54:16.129387 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 23:54:16.129430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 23:54:16.142158 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 23:54:16.154084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:16.165007 systemd-networkd[869]: eth0: DHCPv6 lease lost
Sep 5 23:54:16.165642 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:54:16.165723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 23:54:16.178433 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:54:16.178570 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 23:54:16.191381 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 23:54:16.191447 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:16.222372 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 23:54:16.232090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 23:54:16.232184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:54:16.245251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:16.261200 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 23:54:16.262061 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:16.288849 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 23:54:16.288950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:16.298418 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 23:54:16.298485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:16.309664 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 23:54:16.309728 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:16.322917 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 23:54:16.323080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:16.335426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 23:54:16.335520 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:16.354434 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 23:54:16.354478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:16.365095 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 23:54:16.365150 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:54:16.382059 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 23:54:16.382111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:54:16.398889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:54:16.398957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:16.432098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 23:54:16.446998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 23:54:16.447083 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:16.675149 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Sep 5 23:54:16.461220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:54:16.461282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:16.473993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 23:54:16.474097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 23:54:16.506012 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 23:54:16.506170 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 23:54:16.519200 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 23:54:16.549999 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 23:54:16.578404 systemd[1]: Switching root.
Sep 5 23:54:16.726101 systemd-journald[217]: Journal stopped
Sep 5 23:54:26.427195 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 23:54:26.427223 kernel: SELinux: policy capability open_perms=1
Sep 5 23:54:26.427233 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 23:54:26.427240 kernel: SELinux: policy capability always_check_network=0
Sep 5 23:54:26.427251 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 23:54:26.427258 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 23:54:26.427267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 23:54:26.427275 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 23:54:26.427284 kernel: audit: type=1403 audit(1757116462.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 23:54:26.427294 systemd[1]: Successfully loaded SELinux policy in 185.071ms.
Sep 5 23:54:26.427306 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.254ms.
Sep 5 23:54:26.427319 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:54:26.427328 systemd[1]: Detected virtualization microsoft.
Sep 5 23:54:26.427351 systemd[1]: Detected architecture arm64.
Sep 5 23:54:26.427361 systemd[1]: Detected first boot.
Sep 5 23:54:26.427372 systemd[1]: Hostname set to <ci-4081.3.5-n-8e502b48f1>.
Sep 5 23:54:26.427381 systemd[1]: Initializing machine ID from random generator.
Sep 5 23:54:26.427390 zram_generator::config[1157]: No configuration found.
Sep 5 23:54:26.427400 systemd[1]: Populated /etc with preset unit settings.
Sep 5 23:54:26.427409 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 23:54:26.427417 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 23:54:26.427427 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 23:54:26.427439 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 23:54:26.427448 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 23:54:26.427457 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 23:54:26.427467 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 23:54:26.427476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 23:54:26.427485 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 23:54:26.427495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 23:54:26.427505 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 23:54:26.427515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:26.427524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:26.427534 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 23:54:26.427544 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 23:54:26.427554 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 23:54:26.427563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:54:26.427572 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 5 23:54:26.427583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:26.427593 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 23:54:26.427602 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 23:54:26.427613 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:54:26.427623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 23:54:26.427632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:26.427642 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:54:26.427651 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:54:26.427662 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:54:26.427671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 23:54:26.427681 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 23:54:26.427690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:26.427700 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:26.427710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:26.427721 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 23:54:26.427730 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 23:54:26.427742 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 23:54:26.427751 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 23:54:26.427760 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 23:54:26.427770 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 23:54:26.427779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 23:54:26.427791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 23:54:26.427801 systemd[1]: Reached target machines.target - Containers.
Sep 5 23:54:26.427810 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 23:54:26.427820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:26.427841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:54:26.427867 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 23:54:26.427877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:26.427887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:54:26.427899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:26.427909 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 23:54:26.427918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:26.427929 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 23:54:26.427938 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 23:54:26.427948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 23:54:26.427958 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 23:54:26.427967 kernel: fuse: init (API version 7.39)
Sep 5 23:54:26.427979 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 23:54:26.427989 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:54:26.427998 kernel: loop: module loaded
Sep 5 23:54:26.428006 kernel: ACPI: bus type drm_connector registered
Sep 5 23:54:26.428015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:54:26.428025 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 23:54:26.428051 systemd-journald[1246]: Collecting audit messages is disabled.
Sep 5 23:54:26.428073 systemd-journald[1246]: Journal started
Sep 5 23:54:26.428097 systemd-journald[1246]: Runtime Journal (/run/log/journal/832689dc22b74d9ab70e1a891220f5b8) is 8.0M, max 78.5M, 70.5M free.
Sep 5 23:54:26.428142 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 23:54:25.368966 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 23:54:25.576667 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 5 23:54:25.577049 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 23:54:25.577348 systemd[1]: systemd-journald.service: Consumed 3.199s CPU time.
Sep 5 23:54:26.465044 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:54:26.474704 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 23:54:26.474866 systemd[1]: Stopped verity-setup.service.
Sep 5 23:54:26.492324 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:54:26.493211 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 23:54:26.501619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 23:54:26.508002 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 23:54:26.513568 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 23:54:26.519772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 23:54:26.526353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 23:54:26.533868 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 23:54:26.541437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:26.549056 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 23:54:26.549185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 23:54:26.556277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:26.556413 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:26.563402 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:54:26.563526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:54:26.569609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:26.569739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:26.577159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 23:54:26.577279 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 23:54:26.584366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:26.584496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:26.591586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:26.599071 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 23:54:26.608872 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 23:54:26.616891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:26.632147 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 23:54:26.643927 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 23:54:26.650943 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 23:54:26.657236 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 23:54:26.657274 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:54:26.663754 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 23:54:26.671761 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 23:54:26.679609 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 23:54:26.685479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:26.711986 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 23:54:26.720029 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 23:54:26.727281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:54:26.729074 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 23:54:26.741956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:54:26.748023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:54:26.757162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 23:54:26.768050 systemd-journald[1246]: Time spent on flushing to /var/log/journal/832689dc22b74d9ab70e1a891220f5b8 is 17.492ms for 889 entries.
Sep 5 23:54:26.768050 systemd-journald[1246]: System Journal (/var/log/journal/832689dc22b74d9ab70e1a891220f5b8) is 8.0M, max 2.6G, 2.6G free.
Sep 5 23:54:26.808927 systemd-journald[1246]: Received client request to flush runtime journal.
Sep 5 23:54:26.769001 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 23:54:26.794136 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 23:54:26.805692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 23:54:26.819512 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 23:54:26.829775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 23:54:26.837678 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 23:54:26.845076 kernel: loop0: detected capacity change from 0 to 207008
Sep 5 23:54:26.845751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 23:54:26.852534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:26.865205 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 23:54:26.879110 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 23:54:26.886960 udevadm[1296]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 5 23:54:26.941078 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 23:54:26.960709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 23:54:26.963920 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 23:54:26.984875 kernel: loop1: detected capacity change from 0 to 31320
Sep 5 23:54:27.009911 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 23:54:27.021061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:54:27.121967 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Sep 5 23:54:27.121982 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Sep 5 23:54:27.126379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:27.462857 kernel: loop2: detected capacity change from 0 to 114328
Sep 5 23:54:27.972862 kernel: loop3: detected capacity change from 0 to 114432
Sep 5 23:54:28.314054 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 23:54:28.327017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:28.362158 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Sep 5 23:54:28.380875 kernel: loop4: detected capacity change from 0 to 207008
Sep 5 23:54:28.390853 kernel: loop5: detected capacity change from 0 to 31320
Sep 5 23:54:28.399854 kernel: loop6: detected capacity change from 0 to 114328
Sep 5 23:54:28.408852 kernel: loop7: detected capacity change from 0 to 114432
Sep 5 23:54:28.412350 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 5 23:54:28.412772 (sd-merge)[1317]: Merged extensions into '/usr'.
Sep 5 23:54:28.417735 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 23:54:28.417899 systemd[1]: Reloading...
Sep 5 23:54:28.484957 zram_generator::config[1340]: No configuration found.
Sep 5 23:54:28.716546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:54:28.718444 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 23:54:28.718526 kernel: hv_vmbus: registering driver hyperv_fb
Sep 5 23:54:28.734338 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 5 23:54:28.734444 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 5 23:54:28.745425 kernel: hv_vmbus: registering driver hv_balloon
Sep 5 23:54:28.745529 kernel: Console: switching to colour dummy device 80x25
Sep 5 23:54:28.756595 kernel: Console: switching to colour frame buffer device 128x48
Sep 5 23:54:28.759865 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 5 23:54:28.759933 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 5 23:54:28.801753 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 5 23:54:28.803407 systemd[1]: Reloading finished in 385 ms.
Sep 5 23:54:28.805967 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1372)
Sep 5 23:54:28.830917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:28.844295 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 23:54:28.887044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 5 23:54:28.899068 systemd[1]: Starting ensure-sysext.service...
Sep 5 23:54:28.904003 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 23:54:28.914436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:54:28.932187 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:54:28.945226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
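The (sd-merge) lines above are systemd-sysext at work: it collects the *.raw extension images visible to it, among them the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier in this log, and overlays their contents onto /usr. A rough sketch of the discovery half only; the directory list follows systemd-sysext's documented search path, and the actual merge (an overlay mount over /usr) is omitted:

```python
# Sketch: enumerate extension images the way a sysext-style lookup might,
# printing name -> resolved image, e.g.
#   kubernetes -> /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in map(Path, SEARCH_DIRS):
    if not d.is_dir():
        continue
    for img in sorted(d.glob("*.raw")):
        print(img.stem, "->", img.resolve())
```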
Sep 5 23:54:28.962475 systemd[1]: Reloading requested from client PID 1471 ('systemctl') (unit ensure-sysext.service)...
Sep 5 23:54:28.962496 systemd[1]: Reloading...
Sep 5 23:54:28.993610 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 23:54:28.993930 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 23:54:28.994569 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 23:54:28.994796 systemd-tmpfiles[1474]: ACLs are not supported, ignoring.
Sep 5 23:54:28.996958 systemd-tmpfiles[1474]: ACLs are not supported, ignoring.
Sep 5 23:54:29.037115 systemd-tmpfiles[1474]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:54:29.037127 systemd-tmpfiles[1474]: Skipping /boot
Sep 5 23:54:29.047399 systemd-tmpfiles[1474]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:54:29.047412 systemd-tmpfiles[1474]: Skipping /boot
Sep 5 23:54:29.058016 zram_generator::config[1508]: No configuration found.
Sep 5 23:54:29.174668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:54:29.250287 systemd[1]: Reloading finished in 287 ms.
Sep 5 23:54:29.268747 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 23:54:29.283951 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 23:54:29.291728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:29.315131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 23:54:29.325216 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 23:54:29.349967 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 23:54:29.371647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 23:54:29.381350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:29.388119 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 23:54:29.399125 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 23:54:29.409741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:29.415970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:29.425916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:29.437290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:29.445196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:29.446331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:29.446501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:29.456599 lvm[1572]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:54:29.460928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:29.471901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:29.472060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:29.480303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:29.480440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:29.494399 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 23:54:29.503768 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 23:54:29.513074 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 23:54:29.525029 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:29.532337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:29.543129 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 23:54:29.557157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:29.564858 lvm[1605]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:54:29.576784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:29.588122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:29.596251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:29.597252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:29.597419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:29.608358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:29.609922 augenrules[1609]: No rules
Sep 5 23:54:29.610094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:29.618805 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 23:54:29.626707 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 23:54:29.634674 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 23:54:29.642283 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:29.642551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:29.644961 systemd-resolved[1579]: Positive Trust Anchors:
Sep 5 23:54:29.644982 systemd-resolved[1579]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:54:29.645015 systemd-resolved[1579]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:54:29.648262 systemd-resolved[1579]: Using system hostname 'ci-4081.3.5-n-8e502b48f1'.
Sep 5 23:54:29.652013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:29.662510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:29.669262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:29.675107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:29.683761 systemd-networkd[1473]: lo: Link UP
Sep 5 23:54:29.684091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:54:29.684235 systemd-networkd[1473]: lo: Gained carrier
Sep 5 23:54:29.686307 systemd-networkd[1473]: Enumeration completed
Sep 5 23:54:29.687046 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:29.687052 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:54:29.691179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:29.700227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:29.706314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:29.706503 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 23:54:29.713140 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:54:29.719624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:29.719788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:29.727463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:54:29.727615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:54:29.734130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:29.734274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:29.741667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:29.741816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:29.750632 systemd[1]: Finished ensure-sysext.service.
Sep 5 23:54:29.757612 systemd[1]: Reached target network.target - Network.
Sep 5 23:54:29.768983 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 23:54:29.775580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:54:29.775643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:54:29.798995 kernel: mlx5_core e87b:00:02.0 enP59515s1: Link up
Sep 5 23:54:29.824870 kernel: hv_netvsc 000d3af9-9be5-000d-3af9-9be5000d3af9 eth0: Data path switched to VF: enP59515s1
Sep 5 23:54:29.825869 systemd-networkd[1473]: enP59515s1: Link UP
Sep 5 23:54:29.825990 systemd-networkd[1473]: eth0: Link UP
Sep 5 23:54:29.825993 systemd-networkd[1473]: eth0: Gained carrier
Sep 5 23:54:29.826009 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:29.831149 systemd-networkd[1473]: enP59515s1: Gained carrier
Sep 5 23:54:29.840900 systemd-networkd[1473]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 5 23:54:30.081710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 23:54:30.089827 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:54:31.348998 systemd-networkd[1473]: eth0: Gained IPv6LL
Sep 5 23:54:31.350749 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 23:54:31.358460 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 23:54:32.999301 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 23:54:33.013122 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 23:54:33.024100 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 23:54:33.038478 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 23:54:33.045139 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:54:33.052183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 23:54:33.059397 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 23:54:33.068156 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 23:54:33.075095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 23:54:33.083801 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 23:54:33.091364 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 23:54:33.091397 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:54:33.096693 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:54:33.102816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 23:54:33.110439 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 23:54:33.120366 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 23:54:33.127091 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 23:54:33.133941 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:54:33.139587 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:54:33.145409 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 23:54:33.145433 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 23:54:33.161949 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 5 23:54:33.169986 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 23:54:33.181952 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 5 23:54:33.192334 (chronyd)[1640]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 5 23:54:33.193057 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 23:54:33.200004 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 23:54:33.209062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 23:54:33.209805 jq[1646]: false
Sep 5 23:54:33.218138 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 23:54:33.218178 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 5 23:54:33.218901 chronyd[1649]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 5 23:54:33.221110 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 5 23:54:33.228038 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 5 23:54:33.229662 KVP[1650]: KVP starting; pid is:1650
Sep 5 23:54:33.230145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:54:33.239735 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 23:54:33.249051 KVP[1650]: KVP LIC Version: 3.1
Sep 5 23:54:33.249938 kernel: hv_utils: KVP IC version 4.0
Sep 5 23:54:33.250609 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 23:54:33.261039 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 23:54:33.274126 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 23:54:33.277230 extend-filesystems[1647]: Found loop4
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found loop5
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found loop6
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found loop7
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda1
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda2
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda3
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found usr
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda4
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda6
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda7
Sep 5 23:54:33.287161 extend-filesystems[1647]: Found sda9
Sep 5 23:54:33.287161 extend-filesystems[1647]: Checking size of /dev/sda9
Sep 5 23:54:33.287046 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 23:54:33.478976 extend-filesystems[1647]: Old size kept for /dev/sda9
Sep 5 23:54:33.478976 extend-filesystems[1647]: Found sr0
Sep 5 23:54:33.370907 chronyd[1649]: Timezone right/UTC failed leap second check, ignoring
Sep 5 23:54:33.306055 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 23:54:33.371098 chronyd[1649]: Loaded seccomp filter (level 2)
Sep 5 23:54:33.324817 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 23:54:33.421397 dbus-daemon[1643]: [system] SELinux support is enabled
Sep 5 23:54:33.325385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 23:54:33.342025 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 23:54:33.528199 update_engine[1665]: I20250905 23:54:33.424314 1665 main.cc:92] Flatcar Update Engine starting Sep 5 23:54:33.528199 update_engine[1665]: I20250905 23:54:33.436522 1665 update_check_scheduler.cc:74] Next update check in 2m59s Sep 5 23:54:33.354977 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:54:33.528506 jq[1675]: true Sep 5 23:54:33.381242 systemd[1]: Started chronyd.service - NTP client/server. Sep 5 23:54:33.397474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:54:33.397653 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:54:33.538503 coreos-metadata[1642]: Sep 05 23:54:33.533 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 5 23:54:33.538503 coreos-metadata[1642]: Sep 05 23:54:33.537 INFO Fetch successful Sep 5 23:54:33.397915 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:54:33.549529 coreos-metadata[1642]: Sep 05 23:54:33.538 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 5 23:54:33.549529 coreos-metadata[1642]: Sep 05 23:54:33.544 INFO Fetch successful Sep 5 23:54:33.549529 coreos-metadata[1642]: Sep 05 23:54:33.545 INFO Fetching http://168.63.129.16/machine/7bfaa4fb-48c2-4fe2-9d8e-c1257e08a786/31f9b43f%2D4d14%2D46c9%2Db0ea%2Dd32f51f11b4f.%5Fci%2D4081.3.5%2Dn%2D8e502b48f1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 5 23:54:33.549529 coreos-metadata[1642]: Sep 05 23:54:33.549 INFO Fetch successful Sep 5 23:54:33.549529 coreos-metadata[1642]: Sep 05 23:54:33.549 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 5 23:54:33.398060 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:54:33.549688 jq[1697]: true Sep 5 23:54:33.416252 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:54:33.416456 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:54:33.435499 systemd-logind[1662]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 5 23:54:33.436319 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:54:33.440374 systemd-logind[1662]: New seat seat0. Sep 5 23:54:33.453825 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:54:33.467111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:54:33.512439 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:54:33.512637 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:54:33.548158 (ntainerd)[1698]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:54:33.552698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:54:33.562789 dbus-daemon[1643]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 23:54:33.565175 coreos-metadata[1642]: Sep 05 23:54:33.562 INFO Fetch successful Sep 5 23:54:33.553922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 5 23:54:33.571115 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:54:33.571142 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:54:33.589174 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:54:33.602075 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:54:33.657301 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:54:33.668535 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:54:33.673421 tar[1692]: linux-arm64/LICENSE Sep 5 23:54:33.673421 tar[1692]: linux-arm64/helm Sep 5 23:54:33.715126 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1686) Sep 5 23:54:33.721791 bash[1732]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:54:33.724767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:54:33.756480 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 23:54:33.949132 locksmithd[1717]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:54:34.346030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:54:34.365357 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:54:34.499704 containerd[1698]: time="2025-09-05T23:54:34.499362240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:54:34.563521 tar[1692]: linux-arm64/README.md Sep 5 23:54:34.581049 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:54:34.587308 containerd[1698]: time="2025-09-05T23:54:34.587237040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.594998 containerd[1698]: time="2025-09-05T23:54:34.594955600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:34.595505 containerd[1698]: time="2025-09-05T23:54:34.595489120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:54:34.595608 containerd[1698]: time="2025-09-05T23:54:34.595574000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:54:34.595814 containerd[1698]: time="2025-09-05T23:54:34.595794800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:54:34.595944 containerd[1698]: time="2025-09-05T23:54:34.595927680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.596073 containerd[1698]: time="2025-09-05T23:54:34.596051240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:34.596130 containerd[1698]: time="2025-09-05T23:54:34.596114360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597083120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597107760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597122800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597132720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597229640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597461 containerd[1698]: time="2025-09-05T23:54:34.597426680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597755 containerd[1698]: time="2025-09-05T23:54:34.597728080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:34.597815 containerd[1698]: time="2025-09-05T23:54:34.597802320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:54:34.597984 containerd[1698]: time="2025-09-05T23:54:34.597966040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:54:34.598092 containerd[1698]: time="2025-09-05T23:54:34.598076600Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:54:34.617346 containerd[1698]: time="2025-09-05T23:54:34.617252120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:54:34.617611 containerd[1698]: time="2025-09-05T23:54:34.617504160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:54:34.617731 containerd[1698]: time="2025-09-05T23:54:34.617651640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:54:34.617731 containerd[1698]: time="2025-09-05T23:54:34.617689600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:54:34.617731 containerd[1698]: time="2025-09-05T23:54:34.617708240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:54:34.617935 containerd[1698]: time="2025-09-05T23:54:34.617911520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 5 23:54:34.618190 containerd[1698]: time="2025-09-05T23:54:34.618157560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:54:34.618295 containerd[1698]: time="2025-09-05T23:54:34.618269960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:54:34.618332 containerd[1698]: time="2025-09-05T23:54:34.618303720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:54:34.618332 containerd[1698]: time="2025-09-05T23:54:34.618319160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:54:34.618367 containerd[1698]: time="2025-09-05T23:54:34.618333080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618367 containerd[1698]: time="2025-09-05T23:54:34.618346000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618367 containerd[1698]: time="2025-09-05T23:54:34.618357880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618430 containerd[1698]: time="2025-09-05T23:54:34.618372960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618430 containerd[1698]: time="2025-09-05T23:54:34.618392440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618430 containerd[1698]: time="2025-09-05T23:54:34.618407520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618430 containerd[1698]: time="2025-09-05T23:54:34.618420840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618505 containerd[1698]: time="2025-09-05T23:54:34.618434040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:54:34.618505 containerd[1698]: time="2025-09-05T23:54:34.618455720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618505 containerd[1698]: time="2025-09-05T23:54:34.618469600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618505 containerd[1698]: time="2025-09-05T23:54:34.618482400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618505 containerd[1698]: time="2025-09-05T23:54:34.618496440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618508400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618521160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618533640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618547320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618564360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618579240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618590600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618602560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618614 containerd[1698]: time="2025-09-05T23:54:34.618616040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618632120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618654200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618666440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618677880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618724360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618741840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618753960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618766000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:54:34.618772 containerd[1698]: time="2025-09-05T23:54:34.618776160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:54:34.618948 containerd[1698]: time="2025-09-05T23:54:34.618789160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:54:34.618948 containerd[1698]: time="2025-09-05T23:54:34.618799680Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:54:34.618948 containerd[1698]: time="2025-09-05T23:54:34.618809880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:54:34.619482 containerd[1698]: time="2025-09-05T23:54:34.619118240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:54:34.619482 containerd[1698]: time="2025-09-05T23:54:34.619186200Z" level=info msg="Connect containerd service" Sep 5 23:54:34.619482 containerd[1698]: time="2025-09-05T23:54:34.619225000Z" level=info msg="using legacy CRI server" Sep 5 23:54:34.619482 containerd[1698]: time="2025-09-05T23:54:34.619232560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:54:34.621633 containerd[1698]: time="2025-09-05T23:54:34.621454440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:54:34.623685 containerd[1698]: time="2025-09-05T23:54:34.623557360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:54:34.625858 
containerd[1698]: time="2025-09-05T23:54:34.625170840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:54:34.626049 containerd[1698]: time="2025-09-05T23:54:34.625813280Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:54:34.626274 containerd[1698]: time="2025-09-05T23:54:34.626182280Z" level=info msg="Start subscribing containerd event" Sep 5 23:54:34.626309 containerd[1698]: time="2025-09-05T23:54:34.626291280Z" level=info msg="Start recovering state" Sep 5 23:54:34.626381 containerd[1698]: time="2025-09-05T23:54:34.626363000Z" level=info msg="Start event monitor" Sep 5 23:54:34.626381 containerd[1698]: time="2025-09-05T23:54:34.626379640Z" level=info msg="Start snapshots syncer" Sep 5 23:54:34.626428 containerd[1698]: time="2025-09-05T23:54:34.626389400Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:54:34.626428 containerd[1698]: time="2025-09-05T23:54:34.626398160Z" level=info msg="Start streaming server" Sep 5 23:54:34.626516 containerd[1698]: time="2025-09-05T23:54:34.626482520Z" level=info msg="containerd successfully booted in 0.131854s" Sep 5 23:54:34.626652 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:54:34.830984 kubelet[1770]: E0905 23:54:34.830925 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:54:34.833064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:54:34.833204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:54:35.041785 sshd_keygen[1671]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:54:35.061394 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:54:35.073085 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:54:35.080117 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 5 23:54:35.087006 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:54:35.087187 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:54:35.103737 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:54:35.112037 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 5 23:54:35.119770 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:54:35.137114 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:54:35.144388 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:54:35.151105 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:54:35.156284 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:54:35.163436 systemd[1]: Startup finished in 698ms (kernel) + 17.561s (initrd) + 12.466s (userspace) = 30.725s. Sep 5 23:54:35.508164 login[1807]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:35.514902 login[1808]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:35.521549 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:54:35.532167 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
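
The kubelet crash loop recorded above repeats at every scheduled restart later in this log: /var/lib/kubelet/config.yaml does not exist until the node is bootstrapped (kubeadm normally writes that file during init/join), so each start exits with status=1/FAILURE. A minimal Go sketch of the failure mode, using only the standard library; the wrapper text is illustrative, not kubelet's exact source:

```go
package main

import (
	"fmt"
	"os"
)

// loadKubeletConfig mirrors the failure in the log: kubelet refuses to start
// when its config file is absent, wrapping the underlying os error.
func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
	}
	return data, nil
}

func main() {
	// Path taken from the log; on this node the file is missing.
	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1) // matches systemd's status=1/FAILURE above
	}
}
```
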
Sep 5 23:54:35.535118 systemd-logind[1662]: New session 1 of user core. Sep 5 23:54:35.539362 systemd-logind[1662]: New session 2 of user core. Sep 5 23:54:35.546451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:54:35.552172 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:54:35.555975 (systemd)[1815]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:54:35.709583 systemd[1815]: Queued start job for default target default.target. Sep 5 23:54:35.719780 systemd[1815]: Created slice app.slice - User Application Slice. Sep 5 23:54:35.720151 systemd[1815]: Reached target paths.target - Paths. Sep 5 23:54:35.720173 systemd[1815]: Reached target timers.target - Timers. Sep 5 23:54:35.721481 systemd[1815]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:54:35.732866 systemd[1815]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:54:35.732990 systemd[1815]: Reached target sockets.target - Sockets. Sep 5 23:54:35.733004 systemd[1815]: Reached target basic.target - Basic System. Sep 5 23:54:35.733125 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:54:35.733940 systemd[1815]: Reached target default.target - Main User Target. Sep 5 23:54:35.733990 systemd[1815]: Startup finished in 170ms. Sep 5 23:54:35.734457 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:54:35.735160 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:54:36.967855 waagent[1804]: 2025-09-05T23:54:36.963903Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 5 23:54:36.971382 waagent[1804]: 2025-09-05T23:54:36.971307Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 5 23:54:36.976524 waagent[1804]: 2025-09-05T23:54:36.976466Z INFO Daemon Daemon Python: 3.11.9 Sep 5 23:54:36.983711 waagent[1804]: 2025-09-05T23:54:36.983630Z INFO Daemon Daemon Run daemon Sep 5 23:54:36.987795 waagent[1804]: 2025-09-05T23:54:36.987746Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 5 23:54:36.997403 waagent[1804]: 2025-09-05T23:54:36.997211Z INFO Daemon Daemon Using waagent for provisioning Sep 5 23:54:37.003112 waagent[1804]: 2025-09-05T23:54:37.003057Z INFO Daemon Daemon Activate resource disk Sep 5 23:54:37.008093 waagent[1804]: 2025-09-05T23:54:37.008034Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 5 23:54:37.019340 waagent[1804]: 2025-09-05T23:54:37.019273Z INFO Daemon Daemon Found device: None Sep 5 23:54:37.023967 waagent[1804]: 2025-09-05T23:54:37.023908Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 5 23:54:37.035057 waagent[1804]: 2025-09-05T23:54:37.034997Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 5 23:54:37.048563 waagent[1804]: 2025-09-05T23:54:37.048498Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 5 23:54:37.054242 waagent[1804]: 2025-09-05T23:54:37.054189Z INFO Daemon Daemon Running default provisioning handler Sep 5 23:54:37.066670 waagent[1804]: 2025-09-05T23:54:37.066600Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 5 23:54:37.081437 waagent[1804]: 2025-09-05T23:54:37.081371Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 5 23:54:37.091111 waagent[1804]: 2025-09-05T23:54:37.091055Z INFO Daemon Daemon cloud-init is enabled: False Sep 5 23:54:37.096186 waagent[1804]: 2025-09-05T23:54:37.096139Z INFO Daemon Daemon Copying ovf-env.xml Sep 5 23:54:37.227704 waagent[1804]: 2025-09-05T23:54:37.227544Z INFO Daemon Daemon Successfully mounted dvd Sep 5 23:54:37.257979 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 5 23:54:37.260005 waagent[1804]: 2025-09-05T23:54:37.259933Z INFO Daemon Daemon Detect protocol endpoint Sep 5 23:54:37.264969 waagent[1804]: 2025-09-05T23:54:37.264805Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 5 23:54:37.271287 waagent[1804]: 2025-09-05T23:54:37.271233Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 5 23:54:37.277949 waagent[1804]: 2025-09-05T23:54:37.277899Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 5 23:54:37.283643 waagent[1804]: 2025-09-05T23:54:37.283594Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 5 23:54:37.288638 waagent[1804]: 2025-09-05T23:54:37.288595Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 5 23:54:37.323242 waagent[1804]: 2025-09-05T23:54:37.323194Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 5 23:54:37.332286 waagent[1804]: 2025-09-05T23:54:37.332255Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 5 23:54:37.338148 waagent[1804]: 2025-09-05T23:54:37.338089Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 5 23:54:37.548084 waagent[1804]: 2025-09-05T23:54:37.547920Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 5 23:54:37.555031 waagent[1804]: 2025-09-05T23:54:37.554956Z INFO Daemon Daemon Forcing an update of the goal state. Sep 5 23:54:37.564517 waagent[1804]: 2025-09-05T23:54:37.564461Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 5 23:54:37.630338 waagent[1804]: 2025-09-05T23:54:37.630284Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 5 23:54:37.637199 waagent[1804]: 2025-09-05T23:54:37.637142Z INFO Daemon Sep 5 23:54:37.640643 waagent[1804]: 2025-09-05T23:54:37.640587Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 10bd52a1-d2b3-48b3-b285-31c67dc70f68 eTag: 18260760033761229007 source: Fabric] Sep 5 23:54:37.653539 waagent[1804]: 2025-09-05T23:54:37.653485Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 5 23:54:37.661141 waagent[1804]: 2025-09-05T23:54:37.661092Z INFO Daemon Sep 5 23:54:37.664459 waagent[1804]: 2025-09-05T23:54:37.664409Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 5 23:54:37.676313 waagent[1804]: 2025-09-05T23:54:37.676274Z INFO Daemon Daemon Downloading artifacts profile blob Sep 5 23:54:37.757229 waagent[1804]: 2025-09-05T23:54:37.757130Z INFO Daemon Downloaded certificate {'thumbprint': 'A41329D664778D031B619D2064ED71E23862ADD7', 'hasPrivateKey': True} Sep 5 23:54:37.767955 waagent[1804]: 2025-09-05T23:54:37.767900Z INFO Daemon Fetch goal state completed Sep 5 23:54:37.779983 waagent[1804]: 2025-09-05T23:54:37.779943Z INFO Daemon Daemon Starting provisioning Sep 5 23:54:37.785910 waagent[1804]: 2025-09-05T23:54:37.785850Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 5 23:54:37.791097 waagent[1804]: 2025-09-05T23:54:37.791048Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-8e502b48f1] Sep 5 23:54:37.854857 waagent[1804]: 2025-09-05T23:54:37.854212Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-8e502b48f1] Sep 5 23:54:37.860968 waagent[1804]: 2025-09-05T23:54:37.860899Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 5 23:54:37.868525 waagent[1804]: 2025-09-05T23:54:37.868465Z INFO Daemon Daemon Primary interface is [eth0] Sep 5 23:54:37.898201 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:54:37.898208 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:54:37.899487 waagent[1804]: 2025-09-05T23:54:37.899260Z INFO Daemon Daemon Create user account if not exists Sep 5 23:54:37.898236 systemd-networkd[1473]: eth0: DHCP lease lost Sep 5 23:54:37.905366 waagent[1804]: 2025-09-05T23:54:37.905296Z INFO Daemon Daemon User core already exists, skip useradd Sep 5 23:54:37.911475 waagent[1804]: 2025-09-05T23:54:37.911404Z INFO Daemon Daemon Configure sudoer Sep 5 23:54:37.911928 systemd-networkd[1473]: eth0: DHCPv6 lease lost Sep 5 23:54:37.916679 waagent[1804]: 2025-09-05T23:54:37.916589Z INFO Daemon Daemon Configure sshd Sep 5 23:54:37.921631 waagent[1804]: 2025-09-05T23:54:37.921542Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 5 23:54:37.936550 waagent[1804]: 2025-09-05T23:54:37.936464Z INFO Daemon Daemon Deploy ssh public key. Sep 5 23:54:37.957893 systemd-networkd[1473]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 5 23:54:39.067939 waagent[1804]: 2025-09-05T23:54:39.067885Z INFO Daemon Daemon Provisioning complete Sep 5 23:54:39.087188 waagent[1804]: 2025-09-05T23:54:39.087139Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 5 23:54:39.093689 waagent[1804]: 2025-09-05T23:54:39.093614Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
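
The provisioning traffic above all targets the fixed Azure wireserver address 168.63.129.16, beginning with a "?comp=versions" probe to detect the supported wire protocol (the same endpoint coreos-metadata fetched earlier). A minimal sketch of that probe, assuming only net/http; the URL is taken verbatim from the log:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Same endpoint the agents in this log fetch first.
	resp, err := client.Get("http://168.63.129.16/?comp=versions")
	if err != nil {
		fmt.Println("wireserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes of protocol version data\n", resp.Status, len(body))
}
```
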
Sep 5 23:54:39.104143 waagent[1804]: 2025-09-05T23:54:39.104085Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 5 23:54:39.240684 waagent[1867]: 2025-09-05T23:54:39.239999Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 5 23:54:39.240684 waagent[1867]: 2025-09-05T23:54:39.240156Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 5 23:54:39.240684 waagent[1867]: 2025-09-05T23:54:39.240209Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 5 23:54:43.917897 waagent[1867]: 2025-09-05T23:54:43.917780Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 5 23:54:43.918855 waagent[1867]: 2025-09-05T23:54:43.918461Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 5 23:54:43.918855 waagent[1867]: 2025-09-05T23:54:43.918557Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 5 23:54:43.927762 waagent[1867]: 2025-09-05T23:54:43.927678Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 5 23:54:43.937760 waagent[1867]: 2025-09-05T23:54:43.937710Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 5 23:54:43.938336 waagent[1867]: 2025-09-05T23:54:43.938286Z INFO ExtHandler Sep 5 23:54:43.938413 waagent[1867]: 2025-09-05T23:54:43.938382Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 88485b8d-8aec-4a3d-98af-1ef4eafeb6fa eTag: 18260760033761229007 source: Fabric] Sep 5 23:54:43.938722 waagent[1867]: 2025-09-05T23:54:43.938681Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 5 23:54:43.939921 waagent[1867]: 2025-09-05T23:54:43.939873Z INFO ExtHandler Sep 5 23:54:43.940001 waagent[1867]: 2025-09-05T23:54:43.939969Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 5 23:54:43.944338 waagent[1867]: 2025-09-05T23:54:43.944304Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 5 23:54:44.054861 waagent[1867]: 2025-09-05T23:54:44.053996Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A41329D664778D031B619D2064ED71E23862ADD7', 'hasPrivateKey': True} Sep 5 23:54:44.054861 waagent[1867]: 2025-09-05T23:54:44.054594Z INFO ExtHandler Fetch goal state completed Sep 5 23:54:44.070656 waagent[1867]: 2025-09-05T23:54:44.070599Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1867 Sep 5 23:54:44.070804 waagent[1867]: 2025-09-05T23:54:44.070768Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 5 23:54:44.072524 waagent[1867]: 2025-09-05T23:54:44.072479Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 5 23:54:44.072912 waagent[1867]: 2025-09-05T23:54:44.072875Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 5 23:54:44.148280 waagent[1867]: 2025-09-05T23:54:44.147749Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 5 23:54:44.148280 waagent[1867]: 2025-09-05T23:54:44.147988Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 5 23:54:44.155157 waagent[1867]: 2025-09-05T23:54:44.155109Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 5 23:54:44.162202 systemd[1]: Reloading requested from client PID 1882 ('systemctl') (unit waagent.service)... Sep 5 23:54:44.162219 systemd[1]: Reloading... Sep 5 23:54:44.239463 zram_generator::config[1912]: No configuration found. Sep 5 23:54:44.356764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:54:44.432092 systemd[1]: Reloading finished in 269 ms. Sep 5 23:54:44.460851 waagent[1867]: 2025-09-05T23:54:44.459311Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 5 23:54:44.465861 systemd[1]: Reloading requested from client PID 1970 ('systemctl') (unit waagent.service)... Sep 5 23:54:44.465876 systemd[1]: Reloading... Sep 5 23:54:44.547869 zram_generator::config[2004]: No configuration found. Sep 5 23:54:44.652135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:54:44.727188 systemd[1]: Reloading finished in 261 ms. Sep 5 23:54:44.750848 waagent[1867]: 2025-09-05T23:54:44.750048Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 5 23:54:44.750848 waagent[1867]: 2025-09-05T23:54:44.750219Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 5 23:54:44.939504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:54:44.950018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:54:45.305603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:54:45.310169 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:54:45.353697 kubelet[2070]: E0905 23:54:45.353636 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:54:45.356646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:54:45.356789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:54:45.729567 waagent[1867]: 2025-09-05T23:54:45.729480Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 5 23:54:45.730202 waagent[1867]: 2025-09-05T23:54:45.730139Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 5 23:54:45.731042 waagent[1867]: 2025-09-05T23:54:45.730949Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 5 23:54:45.731539 waagent[1867]: 2025-09-05T23:54:45.731366Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 5 23:54:45.732623 waagent[1867]: 2025-09-05T23:54:45.731755Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 5 23:54:45.732623 waagent[1867]: 2025-09-05T23:54:45.731868Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 5 23:54:45.732623 waagent[1867]: 2025-09-05T23:54:45.732029Z INFO EnvHandler ExtHandler Configure routes Sep 5 23:54:45.732623 waagent[1867]: 2025-09-05T23:54:45.732093Z INFO EnvHandler ExtHandler Gateway:None Sep 5 23:54:45.732623 waagent[1867]: 2025-09-05T23:54:45.732135Z INFO EnvHandler ExtHandler Routes:None Sep 5 23:54:45.732958 waagent[1867]: 2025-09-05T23:54:45.732893Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 5 23:54:45.733197 waagent[1867]: 2025-09-05T23:54:45.733159Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 5 23:54:45.733336 waagent[1867]: 2025-09-05T23:54:45.733303Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 5 23:54:45.733634 waagent[1867]: 2025-09-05T23:54:45.733591Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 5 23:54:45.733940 waagent[1867]: 2025-09-05T23:54:45.733892Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 5 23:54:45.733940 waagent[1867]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 5 23:54:45.733940 waagent[1867]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 5 23:54:45.733940 waagent[1867]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 5 23:54:45.733940 waagent[1867]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 5 23:54:45.733940 waagent[1867]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 5 23:54:45.733940 waagent[1867]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 5 23:54:45.734577 waagent[1867]: 2025-09-05T23:54:45.734510Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 5 23:54:45.734928 waagent[1867]: 2025-09-05T23:54:45.734891Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 5 23:54:45.735352 waagent[1867]: 2025-09-05T23:54:45.734807Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 5 23:54:45.735466 waagent[1867]: 2025-09-05T23:54:45.735424Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 5 23:54:45.743868 waagent[1867]: 2025-09-05T23:54:45.743244Z INFO ExtHandler ExtHandler Sep 5 23:54:45.743868 waagent[1867]: 2025-09-05T23:54:45.743347Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0f29d2da-87ba-4457-98d1-f7b9ef0e2568 correlation 9abc0431-093a-4c96-8dbb-a8d0423e63ce created: 2025-09-05T23:53:16.170868Z] Sep 5 23:54:45.743868 waagent[1867]: 2025-09-05T23:54:45.743738Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 5 23:54:45.744354 waagent[1867]: 2025-09-05T23:54:45.744299Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 5 23:54:45.779705 waagent[1867]: 2025-09-05T23:54:45.779575Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 41AC617F-741C-4107-B892-2AF5010F41F4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 5 23:54:45.784894 waagent[1867]: 2025-09-05T23:54:45.784530Z INFO MonitorHandler ExtHandler Network interfaces: Sep 5 23:54:45.784894 waagent[1867]: Executing ['ip', '-a', '-o', 'link']: Sep 5 23:54:45.784894 waagent[1867]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 5 23:54:45.784894 waagent[1867]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:9b:e5 brd ff:ff:ff:ff:ff:ff Sep 5 23:54:45.784894 waagent[1867]: 3: enP59515s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:9b:e5 brd ff:ff:ff:ff:ff:ff\ altname enP59515p0s2 Sep 5 23:54:45.784894 waagent[1867]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 5 23:54:45.784894 waagent[1867]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 5 23:54:45.784894 waagent[1867]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 5 23:54:45.784894 waagent[1867]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 5 23:54:45.784894 waagent[1867]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 5 23:54:45.784894 waagent[1867]: 2: eth0 inet6 fe80::20d:3aff:fef9:9be5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 5 23:54:45.855989 waagent[1867]: 2025-09-05T23:54:45.855897Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 5 23:54:45.855989 waagent[1867]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.855989 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.855989 waagent[1867]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.855989 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.855989 waagent[1867]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.855989 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.855989 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 5 23:54:45.855989 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 5 23:54:45.855989 waagent[1867]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 5 23:54:45.858912 waagent[1867]: 2025-09-05T23:54:45.858854Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 5 23:54:45.858912 waagent[1867]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.858912 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.858912 waagent[1867]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.858912 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.858912 waagent[1867]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 23:54:45.858912 waagent[1867]: pkts bytes target prot opt in out source destination Sep 5 23:54:45.858912 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 5 23:54:45.858912 waagent[1867]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 5 23:54:45.858912 waagent[1867]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 5 23:54:45.859153 waagent[1867]: 2025-09-05T23:54:45.859119Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 5 23:54:55.439585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:54:55.445036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:54:55.899991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:54:55.912122 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:54:55.948899 kubelet[2113]: E0905 23:54:55.948846 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:54:55.951552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:54:55.951812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:54:57.162094 chronyd[1649]: Selected source PHC0 Sep 5 23:55:02.851943 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:55:02.853158 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:40726.service - OpenSSH per-connection server daemon (10.200.16.10:40726). 
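
The three OUTPUT rules waagent printed above permit DNS (tcp/53) and root-owned (UID 0) traffic to the wireserver while dropping other new connections to it. A sketch of equivalent iptables invocations driven from Go via os/exec; the arguments are reconstructed from the rule dump above, require root, and are illustrative rather than waagent's actual code path:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalents of the three OUTPUT rules in the dump above.
	rules := [][]string{
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "--dport", "53", "-j", "ACCEPT"},
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"},
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"},
	}
	for _, r := range rules {
		if out, err := exec.Command("iptables", r...).CombinedOutput(); err != nil {
			log.Fatalf("iptables %v: %v (%s)", r, err, out)
		}
	}
}
```
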
Sep 5 23:55:03.354502 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 40726 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:03.355825 sshd[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:03.360519 systemd-logind[1662]: New session 3 of user core. Sep 5 23:55:03.366006 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:55:03.748472 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:40734.service - OpenSSH per-connection server daemon (10.200.16.10:40734). Sep 5 23:55:04.181643 sshd[2125]: Accepted publickey for core from 10.200.16.10 port 40734 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:04.182807 sshd[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:04.186435 systemd-logind[1662]: New session 4 of user core. Sep 5 23:55:04.194997 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:55:04.504372 sshd[2125]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:04.508730 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:40734.service: Deactivated successfully. Sep 5 23:55:04.510682 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:55:04.511572 systemd-logind[1662]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:55:04.512442 systemd-logind[1662]: Removed session 4. Sep 5 23:55:04.581793 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:40744.service - OpenSSH per-connection server daemon (10.200.16.10:40744). Sep 5 23:55:05.003785 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 40744 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:05.005217 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:05.009549 systemd-logind[1662]: New session 5 of user core. Sep 5 23:55:05.018985 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:55:05.326879 sshd[2132]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:05.330184 systemd-logind[1662]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:55:05.330338 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:40744.service: Deactivated successfully. Sep 5 23:55:05.332571 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:55:05.334290 systemd-logind[1662]: Removed session 5. Sep 5 23:55:05.419099 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:40746.service - OpenSSH per-connection server daemon (10.200.16.10:40746). Sep 5 23:55:05.874237 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 40746 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:05.875539 sshd[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:05.879157 systemd-logind[1662]: New session 6 of user core. Sep 5 23:55:05.887983 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:55:06.136861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 23:55:06.143044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:06.211069 sshd[2139]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:06.213906 systemd-logind[1662]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:55:06.214304 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:40746.service: Deactivated successfully. 
Sep 5 23:55:06.216417 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:55:06.218698 systemd-logind[1662]: Removed session 6. Sep 5 23:55:06.308091 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:40754.service - OpenSSH per-connection server daemon (10.200.16.10:40754). Sep 5 23:55:06.489183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:06.504117 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:06.544326 kubelet[2156]: E0905 23:55:06.544265 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:06.546980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:06.547266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:06.752376 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 40754 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:06.753427 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:06.757868 systemd-logind[1662]: New session 7 of user core. Sep 5 23:55:06.768993 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:55:07.202662 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:55:07.202971 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:07.231779 sudo[2164]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:07.304261 sshd[2149]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:07.308378 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:40754.service: Deactivated successfully. Sep 5 23:55:07.310047 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:55:07.310857 systemd-logind[1662]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:55:07.312077 systemd-logind[1662]: Removed session 7. Sep 5 23:55:07.389465 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:40766.service - OpenSSH per-connection server daemon (10.200.16.10:40766). Sep 5 23:55:07.837260 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 40766 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:07.838682 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:07.842526 systemd-logind[1662]: New session 8 of user core. Sep 5 23:55:07.853016 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:55:08.092456 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:55:08.093141 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:08.095956 sudo[2173]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:08.100558 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:55:08.100859 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:08.113163 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Sep 5 23:55:08.115592 auditctl[2176]: No rules Sep 5 23:55:08.115944 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:55:08.116126 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:55:08.124600 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:55:08.145296 augenrules[2194]: No rules Sep 5 23:55:08.146246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:55:08.148872 sudo[2172]: pam_unix(sudo:session): session closed for user root Sep 5 23:55:08.237492 sshd[2169]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:08.241171 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:40766.service: Deactivated successfully. Sep 5 23:55:08.242971 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:55:08.244658 systemd-logind[1662]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:55:08.245528 systemd-logind[1662]: Removed session 8. Sep 5 23:55:08.314077 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:40772.service - OpenSSH per-connection server daemon (10.200.16.10:40772). Sep 5 23:55:08.733996 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 40772 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:55:08.735327 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:08.740149 systemd-logind[1662]: New session 9 of user core. Sep 5 23:55:08.750039 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:55:08.975625 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:55:08.975980 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:55:10.192077 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:55:10.192233 (dockerd)[2220]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:55:11.126637 dockerd[2220]: time="2025-09-05T23:55:11.126325667Z" level=info msg="Starting up" Sep 5 23:55:11.725510 dockerd[2220]: time="2025-09-05T23:55:11.725416380Z" level=info msg="Loading containers: start." Sep 5 23:55:11.979851 kernel: Initializing XFRM netlink socket Sep 5 23:55:12.124242 systemd-networkd[1473]: docker0: Link UP Sep 5 23:55:12.147980 dockerd[2220]: time="2025-09-05T23:55:12.147941998Z" level=info msg="Loading containers: done." Sep 5 23:55:12.158724 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1122314285-merged.mount: Deactivated successfully. Sep 5 23:55:12.668813 dockerd[2220]: time="2025-09-05T23:55:12.668694833Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:55:12.669344 dockerd[2220]: time="2025-09-05T23:55:12.669009714Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:55:12.669344 dockerd[2220]: time="2025-09-05T23:55:12.669159034Z" level=info msg="Daemon has completed initialization" Sep 5 23:55:12.888115 dockerd[2220]: time="2025-09-05T23:55:12.888023210Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:55:12.888743 systemd[1]: Started docker.service - Docker Application Container Engine. 
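
The PullImage operations that follow are served by the containerd instance booted earlier on /run/containerd/containerd.sock. A hedged sketch of the same pull through containerd's Go client; the module import and the "k8s.io" namespace are assumptions not shown in the log, while the socket path and image reference are taken from it:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path as reported by containerd at startup above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI images live in the "k8s.io" namespace by convention (assumption).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.32.8", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", img.Name())
}
```
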
Sep 5 23:55:13.551137 containerd[1698]: time="2025-09-05T23:55:13.551095216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 23:55:14.516942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373299631.mount: Deactivated successfully. Sep 5 23:55:16.689496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 23:55:16.700188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:16.819739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:16.825389 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:16.906992 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 5 23:55:16.937692 kubelet[2378]: E0905 23:55:16.937586 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:16.939996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:16.940156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:18.445056 update_engine[1665]: I20250905 23:55:18.444986 1665 update_attempter.cc:509] Updating boot flags... Sep 5 23:55:20.161885 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2397) Sep 5 23:55:20.816894 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2400) Sep 5 23:55:22.479868 containerd[1698]: time="2025-09-05T23:55:22.477877118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.483753 containerd[1698]: time="2025-09-05T23:55:22.483698449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 5 23:55:22.490939 containerd[1698]: time="2025-09-05T23:55:22.490907664Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.496391 containerd[1698]: time="2025-09-05T23:55:22.496356354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:22.497415 containerd[1698]: time="2025-09-05T23:55:22.497374236Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 8.94623302s" Sep 5 23:55:22.497476 containerd[1698]: time="2025-09-05T23:55:22.497415236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 5 23:55:22.498772 containerd[1698]: time="2025-09-05T23:55:22.498732559Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 23:55:24.745872 containerd[1698]: time="2025-09-05T23:55:24.745777295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.751004 containerd[1698]: time="2025-09-05T23:55:24.750960185Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 5 23:55:24.758099 containerd[1698]: time="2025-09-05T23:55:24.758051079Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.768651 containerd[1698]: time="2025-09-05T23:55:24.768580540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:24.769802 containerd[1698]: time="2025-09-05T23:55:24.769622822Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 2.270716503s" Sep 5 23:55:24.769802 containerd[1698]: time="2025-09-05T23:55:24.769666542Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 5 23:55:24.770310 containerd[1698]: time="2025-09-05T23:55:24.770116343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 23:55:26.324874 containerd[1698]: time="2025-09-05T23:55:26.324604736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.328601 containerd[1698]: time="2025-09-05T23:55:26.328563421Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 5 23:55:26.337060 containerd[1698]: time="2025-09-05T23:55:26.337023350Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.347655 containerd[1698]: time="2025-09-05T23:55:26.347597282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:26.349694 containerd[1698]: time="2025-09-05T23:55:26.349658244Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.579509541s" Sep 5 23:55:26.349740 containerd[1698]: time="2025-09-05T23:55:26.349699924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference 
\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 5 23:55:26.350135 containerd[1698]: time="2025-09-05T23:55:26.350104005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 23:55:27.189647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 5 23:55:27.196050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:27.336420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:27.341312 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:27.775905 kubelet[2507]: E0905 23:55:27.495165 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:27.497462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:27.497605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:28.111746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839038048.mount: Deactivated successfully. Sep 5 23:55:28.489565 containerd[1698]: time="2025-09-05T23:55:28.489510148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.493629 containerd[1698]: time="2025-09-05T23:55:28.493480112Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 5 23:55:28.500283 containerd[1698]: time="2025-09-05T23:55:28.499741319Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.505207 containerd[1698]: time="2025-09-05T23:55:28.505149965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.506206 containerd[1698]: time="2025-09-05T23:55:28.505752286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 2.155612721s" Sep 5 23:55:28.506206 containerd[1698]: time="2025-09-05T23:55:28.505785286Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 5 23:55:28.506375 containerd[1698]: time="2025-09-05T23:55:28.506340887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:55:29.296956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592012429.mount: Deactivated successfully. 
Sep 5 23:55:35.923873 containerd[1698]: time="2025-09-05T23:55:35.923482535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:35.926820 containerd[1698]: time="2025-09-05T23:55:35.926577940Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 5 23:55:35.931545 containerd[1698]: time="2025-09-05T23:55:35.931497267Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:35.969210 containerd[1698]: time="2025-09-05T23:55:35.969149282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:35.970485 containerd[1698]: time="2025-09-05T23:55:35.970353404Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 7.463973957s" Sep 5 23:55:35.970485 containerd[1698]: time="2025-09-05T23:55:35.970388164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:55:35.971662 containerd[1698]: time="2025-09-05T23:55:35.971611646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:55:36.945374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927324071.mount: Deactivated successfully. 
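Each ImageCreate/Pulled pair above names an image twice: by repo digest (the sha256 of its manifest) and by image id (the sha256 of its config blob). Both are ordinary content hashes, as in this sketch (the blob here is a stand-in, not a real manifest):

```go
// OCI-style digest: "sha256:" plus the hex SHA-256 of the raw bytes.
package main

import (
	"crypto/sha256"
	"fmt"
)

func ociDigest(blob []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
}

func main() {
	fmt.Println(ociDigest([]byte(`{"schemaVersion":2}`)))
}
```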
Sep 5 23:55:37.120288 containerd[1698]: time="2025-09-05T23:55:37.120235832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:37.181789 containerd[1698]: time="2025-09-05T23:55:37.181748784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 5 23:55:37.188815 containerd[1698]: time="2025-09-05T23:55:37.188782632Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:37.230869 containerd[1698]: time="2025-09-05T23:55:37.230754400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:37.232396 containerd[1698]: time="2025-09-05T23:55:37.232357162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.260704876s" Sep 5 23:55:37.232506 containerd[1698]: time="2025-09-05T23:55:37.232399162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:55:37.232882 containerd[1698]: time="2025-09-05T23:55:37.232856083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 23:55:37.689530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 5 23:55:37.695025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:37.793385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:37.797866 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:37.835194 kubelet[2586]: E0905 23:55:37.835119 2586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:37.837823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:37.838129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:47.939576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 5 23:55:47.947048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:48.045557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
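The crash loop continues on a fixed cadence: restart counters 4, 5, 6 and 7 land roughly ten seconds apart, which is what a unit configured with Restart=always and a ten-second RestartSec would produce (an assumption; the unit file itself never appears in this log). A small extractor for those counter lines, fed from something like `journalctl -u kubelet`:

```go
// Pull "Scheduled restart job, restart counter is at N." lines out of a
// journal stream on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var restartRe = regexp.MustCompile(`kubelet\.service: Scheduled restart job, restart counter is at (\d+)\.`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if m := restartRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("restart counter:", m[1])
		}
	}
}
```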
Sep 5 23:55:48.050228 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:55:48.085692 kubelet[2601]: E0905 23:55:48.085600 2601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:55:48.087941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:55:48.088083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:55:49.120076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003208215.mount: Deactivated successfully. Sep 5 23:55:51.117880 containerd[1698]: time="2025-09-05T23:55:51.116946588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:51.120545 containerd[1698]: time="2025-09-05T23:55:51.120284872Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 5 23:55:51.128036 containerd[1698]: time="2025-09-05T23:55:51.127995362Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:51.135501 containerd[1698]: time="2025-09-05T23:55:51.135450011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:51.137353 containerd[1698]: time="2025-09-05T23:55:51.136926373Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 13.90403853s" Sep 5 23:55:51.137353 containerd[1698]: time="2025-09-05T23:55:51.136964133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 5 23:55:56.184368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:56.192046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:56.224301 systemd[1]: Reloading requested from client PID 2688 ('systemctl') (unit session-9.scope)... Sep 5 23:55:56.224467 systemd[1]: Reloading... Sep 5 23:55:56.334575 zram_generator::config[2729]: No configuration found. Sep 5 23:55:56.434989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:55:56.513343 systemd[1]: Reloading finished in 288 ms. Sep 5 23:55:56.560010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:56.561610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:56.564530 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:55:56.564888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
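The daemon-reload requested from session-9.scope is what finally breaks the loop: that is the session running install.sh under sudo, so presumably it has just written the kubelet drop-ins and the /var/lib/kubelet/config.yaml the earlier starts were missing (the script's contents are not in this log). For reference, a hedged sketch of the smallest KubeletConfiguration such a step could drop in place; the field values are illustrative, not recovered from this host:

```go
// Write a minimal KubeletConfiguration where the failing starts looked for
// one. apiVersion/kind are the real config types; cgroupDriver is set to
// systemd to match the CgroupDriver the kubelet reports later in this log.
package main

import "os"

const minimalKubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(minimalKubeletConfig), 0o644); err != nil {
		panic(err)
	}
}
```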
Sep 5 23:55:56.575212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:55:56.693957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:55:56.706296 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:55:56.844542 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:55:56.844542 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:55:56.844542 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:55:56.845358 kubelet[2797]: I0905 23:55:56.845002 2797 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:55:58.036720 kubelet[2797]: I0905 23:55:58.036676 2797 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 23:55:58.038862 kubelet[2797]: I0905 23:55:58.037655 2797 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:55:58.038862 kubelet[2797]: I0905 23:55:58.037971 2797 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 23:55:58.057864 kubelet[2797]: E0905 23:55:58.057344 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:58.058815 kubelet[2797]: I0905 23:55:58.058786 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:55:58.064002 kubelet[2797]: E0905 23:55:58.063967 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:55:58.064002 kubelet[2797]: I0905 23:55:58.063999 2797 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:55:58.066788 kubelet[2797]: I0905 23:55:58.066767 2797 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:55:58.068223 kubelet[2797]: I0905 23:55:58.068187 2797 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:55:58.068392 kubelet[2797]: I0905 23:55:58.068226 2797 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-8e502b48f1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:55:58.068485 kubelet[2797]: I0905 23:55:58.068401 2797 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:55:58.068485 kubelet[2797]: I0905 23:55:58.068410 2797 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 23:55:58.068561 kubelet[2797]: I0905 23:55:58.068529 2797 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:58.071431 kubelet[2797]: I0905 23:55:58.071232 2797 kubelet.go:446] "Attempting to sync node with API server" Sep 5 23:55:58.071431 kubelet[2797]: I0905 23:55:58.071258 2797 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:55:58.071431 kubelet[2797]: I0905 23:55:58.071356 2797 kubelet.go:352] "Adding apiserver pod source" Sep 5 23:55:58.071431 kubelet[2797]: I0905 23:55:58.071368 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:55:58.083469 kubelet[2797]: W0905 23:55:58.083201 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:58.083469 kubelet[2797]: E0905 23:55:58.083260 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:58.083932 kubelet[2797]: 
W0905 23:55:58.083887 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:58.084527 kubelet[2797]: E0905 23:55:58.084030 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:58.084712 kubelet[2797]: I0905 23:55:58.084650 2797 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:55:58.085624 kubelet[2797]: I0905 23:55:58.085603 2797 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:55:58.085865 kubelet[2797]: W0905 23:55:58.085763 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:55:58.088548 kubelet[2797]: I0905 23:55:58.088521 2797 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:55:58.088860 kubelet[2797]: I0905 23:55:58.088666 2797 server.go:1287] "Started kubelet" Sep 5 23:55:58.090749 kubelet[2797]: I0905 23:55:58.090589 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:55:58.094400 kubelet[2797]: I0905 23:55:58.094371 2797 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:55:58.095611 kubelet[2797]: I0905 23:55:58.095592 2797 server.go:479] "Adding debug handlers to kubelet server" Sep 5 23:55:58.096855 kubelet[2797]: E0905 23:55:58.096700 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-8e502b48f1.1862883331d49fb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-8e502b48f1,UID:ci-4081.3.5-n-8e502b48f1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-8e502b48f1,},FirstTimestamp:2025-09-05 23:55:58.088642481 +0000 UTC m=+1.379144430,LastTimestamp:2025-09-05 23:55:58.088642481 +0000 UTC m=+1.379144430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-8e502b48f1,}" Sep 5 23:55:58.098080 kubelet[2797]: I0905 23:55:58.097421 2797 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:55:58.098080 kubelet[2797]: E0905 23:55:58.097642 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.098080 kubelet[2797]: I0905 23:55:58.098039 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:55:58.098520 kubelet[2797]: I0905 23:55:58.098453 2797 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:55:58.098691 kubelet[2797]: I0905 23:55:58.098657 2797 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:55:58.100862 kubelet[2797]: E0905 23:55:58.100784 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-8e502b48f1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms" Sep 5 23:55:58.102590 kubelet[2797]: I0905 23:55:58.102567 2797 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:55:58.102672 kubelet[2797]: I0905 23:55:58.102610 2797 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:55:58.103636 kubelet[2797]: W0905 23:55:58.103577 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:58.103636 kubelet[2797]: E0905 23:55:58.103633 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:58.103906 kubelet[2797]: I0905 23:55:58.103766 2797 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:55:58.103906 kubelet[2797]: I0905 23:55:58.103781 2797 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:55:58.103906 kubelet[2797]: I0905 23:55:58.103875 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:55:58.111904 kubelet[2797]: E0905 23:55:58.111870 2797 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:55:58.198100 kubelet[2797]: E0905 23:55:58.198051 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.242648 kubelet[2797]: I0905 23:55:58.242524 2797 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:55:58.242648 kubelet[2797]: I0905 23:55:58.242573 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:55:58.242648 kubelet[2797]: I0905 23:55:58.242601 2797 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:55:58.298788 kubelet[2797]: E0905 23:55:58.298684 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.302207 kubelet[2797]: E0905 23:55:58.302177 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-8e502b48f1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms" Sep 5 23:55:58.399647 kubelet[2797]: E0905 23:55:58.399606 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.500016 kubelet[2797]: E0905 23:55:58.499984 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.598466 kubelet[2797]: I0905 23:55:58.597984 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:55:58.599461 kubelet[2797]: I0905 23:55:58.599345 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:55:58.599461 kubelet[2797]: I0905 23:55:58.599384 2797 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 23:55:58.599461 kubelet[2797]: I0905 23:55:58.599405 2797 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
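Every client-go reflector and lease request above fails with "connect: connection refused" against 10.200.20.33:6443. During control-plane bootstrap that is expected rather than fatal: this kubelet itself has to start the static kube-apiserver pod before anything answers on that port, so it keeps retrying. A sketch of the sort of readiness poll an external script could run while this settles:

```go
// Poll the apiserver endpoint from the log until the TCP port accepts.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.200.20.33:6443" // endpoint from the log
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(5 * time.Second)
	}
}
```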
Sep 5 23:55:58.599461 kubelet[2797]: I0905 23:55:58.599410 2797 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 23:55:58.599461 kubelet[2797]: E0905 23:55:58.599459 2797 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:55:58.600699 kubelet[2797]: E0905 23:55:58.600652 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.601584 kubelet[2797]: W0905 23:55:58.601523 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:58.601659 kubelet[2797]: E0905 23:55:58.601587 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:58.631336 kubelet[2797]: I0905 23:55:58.631026 2797 policy_none.go:49] "None policy: Start" Sep 5 23:55:58.631336 kubelet[2797]: I0905 23:55:58.631058 2797 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:55:58.631336 kubelet[2797]: I0905 23:55:58.631073 2797 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:55:58.644772 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:55:58.659270 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:55:58.662186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 23:55:58.669716 kubelet[2797]: I0905 23:55:58.669682 2797 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:55:58.670469 kubelet[2797]: I0905 23:55:58.669909 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:55:58.670469 kubelet[2797]: I0905 23:55:58.669929 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:55:58.670469 kubelet[2797]: I0905 23:55:58.670233 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:55:58.672338 kubelet[2797]: E0905 23:55:58.672291 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:55:58.672338 kubelet[2797]: E0905 23:55:58.672337 2797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:55:58.702928 kubelet[2797]: E0905 23:55:58.702746 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-8e502b48f1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms" Sep 5 23:55:58.704954 kubelet[2797]: I0905 23:55:58.704441 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.704954 kubelet[2797]: I0905 23:55:58.704475 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.704954 kubelet[2797]: I0905 23:55:58.704495 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.704954 kubelet[2797]: I0905 23:55:58.704512 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.704954 kubelet[2797]: I0905 23:55:58.704527 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.705163 kubelet[2797]: I0905 23:55:58.704543 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.705163 kubelet[2797]: I0905 23:55:58.704559 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: 
\"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.705163 kubelet[2797]: I0905 23:55:58.704573 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.711505 systemd[1]: Created slice kubepods-burstable-podc76b9af9895addee84c722d50d322ed1.slice - libcontainer container kubepods-burstable-podc76b9af9895addee84c722d50d322ed1.slice. Sep 5 23:55:58.724428 kubelet[2797]: E0905 23:55:58.724141 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.728021 systemd[1]: Created slice kubepods-burstable-pod57bc0f4980b4dba2bf976b51f5d3d673.slice - libcontainer container kubepods-burstable-pod57bc0f4980b4dba2bf976b51f5d3d673.slice. Sep 5 23:55:58.730571 kubelet[2797]: E0905 23:55:58.730529 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.732443 systemd[1]: Created slice kubepods-burstable-podc25b9e4a8b031aa5d3944d5c4bcb98b9.slice - libcontainer container kubepods-burstable-podc25b9e4a8b031aa5d3944d5c4bcb98b9.slice. Sep 5 23:55:58.734523 kubelet[2797]: E0905 23:55:58.734496 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.771939 kubelet[2797]: I0905 23:55:58.771912 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.772280 kubelet[2797]: E0905 23:55:58.772249 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.805026 kubelet[2797]: I0905 23:55:58.804950 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25b9e4a8b031aa5d3944d5c4bcb98b9-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" (UID: \"c25b9e4a8b031aa5d3944d5c4bcb98b9\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.973855 kubelet[2797]: I0905 23:55:58.973804 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.974199 kubelet[2797]: E0905 23:55:58.974164 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:58.983856 kubelet[2797]: W0905 23:55:58.983768 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:58.983856 kubelet[2797]: E0905 23:55:58.983809 2797 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:59.026304 containerd[1698]: time="2025-09-05T23:55:59.025681211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-8e502b48f1,Uid:c76b9af9895addee84c722d50d322ed1,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:59.031666 containerd[1698]: time="2025-09-05T23:55:59.031431217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-8e502b48f1,Uid:57bc0f4980b4dba2bf976b51f5d3d673,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:59.035576 containerd[1698]: time="2025-09-05T23:55:59.035389942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-8e502b48f1,Uid:c25b9e4a8b031aa5d3944d5c4bcb98b9,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:59.250693 kubelet[2797]: W0905 23:55:59.250244 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:59.250693 kubelet[2797]: E0905 23:55:59.250315 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:59.376602 kubelet[2797]: I0905 23:55:59.376521 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:59.377096 kubelet[2797]: E0905 23:55:59.377069 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:55:59.408863 kubelet[2797]: W0905 23:55:59.408712 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:55:59.408863 kubelet[2797]: E0905 23:55:59.408777 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:59.503908 kubelet[2797]: E0905 23:55:59.503758 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-8e502b48f1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="1.6s" Sep 5 23:55:59.648471 kubelet[2797]: W0905 23:55:59.648407 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.200.20.33:6443: connect: connection refused Sep 5 23:55:59.648619 kubelet[2797]: E0905 23:55:59.648480 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:55:59.713535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1842458767.mount: Deactivated successfully. Sep 5 23:55:59.748455 containerd[1698]: time="2025-09-05T23:55:59.748403400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:59.762676 containerd[1698]: time="2025-09-05T23:55:59.762560855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 23:55:59.767177 containerd[1698]: time="2025-09-05T23:55:59.767132499Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:59.770953 containerd[1698]: time="2025-09-05T23:55:59.770882983Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:59.776456 containerd[1698]: time="2025-09-05T23:55:59.776400629Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:59.780813 containerd[1698]: time="2025-09-05T23:55:59.780775234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:55:59.785853 containerd[1698]: time="2025-09-05T23:55:59.785617279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:55:59.793666 containerd[1698]: time="2025-09-05T23:55:59.793596527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:55:59.794742 containerd[1698]: time="2025-09-05T23:55:59.794533688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 759.072586ms" Sep 5 23:55:59.796810 containerd[1698]: time="2025-09-05T23:55:59.796774010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 765.266233ms" Sep 5 23:55:59.797350 containerd[1698]: time="2025-09-05T23:55:59.797322291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 771.556959ms" Sep 5 23:56:00.178920 kubelet[2797]: I0905 23:56:00.178814 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:00.179281 kubelet[2797]: E0905 23:56:00.179185 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:00.199624 kubelet[2797]: E0905 23:56:00.199580 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:56:01.105171 kubelet[2797]: E0905 23:56:01.105127 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-8e502b48f1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="3.2s" Sep 5 23:56:01.295415 kubelet[2797]: W0905 23:56:01.295356 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:56:01.295556 kubelet[2797]: E0905 23:56:01.295422 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:56:01.383100 containerd[1698]: time="2025-09-05T23:56:01.382739346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:01.383100 containerd[1698]: time="2025-09-05T23:56:01.382791946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:01.383100 containerd[1698]: time="2025-09-05T23:56:01.382807426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.384373 containerd[1698]: time="2025-09-05T23:56:01.384286148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:01.385431 containerd[1698]: time="2025-09-05T23:56:01.385366349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.385593 containerd[1698]: time="2025-09-05T23:56:01.385535950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:01.385689 containerd[1698]: time="2025-09-05T23:56:01.385582670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.385872 containerd[1698]: time="2025-09-05T23:56:01.385791750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.386876 containerd[1698]: time="2025-09-05T23:56:01.386786871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:01.386952 containerd[1698]: time="2025-09-05T23:56:01.386907951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:01.388192 containerd[1698]: time="2025-09-05T23:56:01.387963312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.388525 containerd[1698]: time="2025-09-05T23:56:01.388408353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:01.424229 systemd[1]: Started cri-containerd-af4f60d4a5014ccaff2bf0a013f0ee4204a492af5af718745bb7cf16b8f656ca.scope - libcontainer container af4f60d4a5014ccaff2bf0a013f0ee4204a492af5af718745bb7cf16b8f656ca. Sep 5 23:56:01.430003 systemd[1]: Started cri-containerd-95289eba844a077117f635f30cb2247059b929bac73013e297d58bbe73e11f99.scope - libcontainer container 95289eba844a077117f635f30cb2247059b929bac73013e297d58bbe73e11f99. Sep 5 23:56:01.435548 systemd[1]: Started cri-containerd-1e44cd07384793a50198e5d95068757c6c8b6deb9c0a8894d8e6342c47f77536.scope - libcontainer container 1e44cd07384793a50198e5d95068757c6c8b6deb9c0a8894d8e6342c47f77536. 
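The three cri-containerd-<id>.scope units just started are the sandbox (pause) containers for the three static pods, parented under the kubepods-burstable-pod<uid>.slice cgroups created earlier. A sketch of how the kubelet-style slice name relates to a pod UID (dashes in the UID become underscores, since systemd reserves "-" as a slice path separator):

```go
// Derive the systemd slice name for a pod, matching the
// "Created slice kubepods-burstable-pod<uid>.slice" lines above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of the kube-apiserver static pod, taken from the log
	fmt.Println(podSliceName("burstable", "c76b9af9895addee84c722d50d322ed1"))
}
```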
Sep 5 23:56:01.471798 containerd[1698]: time="2025-09-05T23:56:01.471752448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-8e502b48f1,Uid:57bc0f4980b4dba2bf976b51f5d3d673,Namespace:kube-system,Attempt:0,} returns sandbox id \"af4f60d4a5014ccaff2bf0a013f0ee4204a492af5af718745bb7cf16b8f656ca\"" Sep 5 23:56:01.479005 containerd[1698]: time="2025-09-05T23:56:01.478852936Z" level=info msg="CreateContainer within sandbox \"af4f60d4a5014ccaff2bf0a013f0ee4204a492af5af718745bb7cf16b8f656ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:56:01.494313 containerd[1698]: time="2025-09-05T23:56:01.493594033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-8e502b48f1,Uid:c25b9e4a8b031aa5d3944d5c4bcb98b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"95289eba844a077117f635f30cb2247059b929bac73013e297d58bbe73e11f99\"" Sep 5 23:56:01.494313 containerd[1698]: time="2025-09-05T23:56:01.493767033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-8e502b48f1,Uid:c76b9af9895addee84c722d50d322ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e44cd07384793a50198e5d95068757c6c8b6deb9c0a8894d8e6342c47f77536\"" Sep 5 23:56:01.497326 containerd[1698]: time="2025-09-05T23:56:01.497287797Z" level=info msg="CreateContainer within sandbox \"95289eba844a077117f635f30cb2247059b929bac73013e297d58bbe73e11f99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:56:01.497707 containerd[1698]: time="2025-09-05T23:56:01.497290757Z" level=info msg="CreateContainer within sandbox \"1e44cd07384793a50198e5d95068757c6c8b6deb9c0a8894d8e6342c47f77536\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:56:01.619909 containerd[1698]: time="2025-09-05T23:56:01.619857137Z" level=info msg="CreateContainer within sandbox \"af4f60d4a5014ccaff2bf0a013f0ee4204a492af5af718745bb7cf16b8f656ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e894355ec625bcff62842d1b4253b1c78ef9955014834c4ef179a7f5edd2b2c0\"" Sep 5 23:56:01.620665 containerd[1698]: time="2025-09-05T23:56:01.620633698Z" level=info msg="StartContainer for \"e894355ec625bcff62842d1b4253b1c78ef9955014834c4ef179a7f5edd2b2c0\"" Sep 5 23:56:01.652022 systemd[1]: Started cri-containerd-e894355ec625bcff62842d1b4253b1c78ef9955014834c4ef179a7f5edd2b2c0.scope - libcontainer container e894355ec625bcff62842d1b4253b1c78ef9955014834c4ef179a7f5edd2b2c0. 
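These lines trace the CRI call order for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox and returns a container id, and StartContainer runs it. A sketch of that contract only (the real interface is the gRPC RuntimeService in k8s.io/cri-api; the fake below just echoes the sequencing with placeholder ids):

```go
package main

import "fmt"

// RuntimeService captures the ordering visible in the log, not the full CRI surface.
type RuntimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(string, string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(string) error { return nil }

func main() {
	var rt RuntimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-controller-manager")
	c, _ := rt.CreateContainer(sb, "kube-controller-manager")
	_ = rt.StartContainer(c)
	fmt.Println("started", c, "in", sb)
}
```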
Sep 5 23:56:01.781016 kubelet[2797]: I0905 23:56:01.780962 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:01.781317 kubelet[2797]: E0905 23:56:01.781284 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:02.035115 kubelet[2797]: W0905 23:56:02.035044 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:56:02.035115 kubelet[2797]: E0905 23:56:02.035091 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:56:02.060801 kubelet[2797]: W0905 23:56:02.060733 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:56:02.060801 kubelet[2797]: E0905 23:56:02.060773 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:56:02.231867 kubelet[2797]: W0905 23:56:02.231769 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Sep 5 23:56:02.231867 kubelet[2797]: E0905 23:56:02.231819 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-8e502b48f1&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:56:03.620864 containerd[1698]: time="2025-09-05T23:56:03.620745265Z" level=info msg="StartContainer for \"e894355ec625bcff62842d1b4253b1c78ef9955014834c4ef179a7f5edd2b2c0\" returns successfully" Sep 5 23:56:03.778435 containerd[1698]: time="2025-09-05T23:56:03.778382645Z" level=info msg="CreateContainer within sandbox \"95289eba844a077117f635f30cb2247059b929bac73013e297d58bbe73e11f99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"23b60cb146fbdc21db30e6d25cb4360b02f3a4402ad83e975ea03ba51d0d73d4\"" Sep 5 23:56:03.779037 containerd[1698]: time="2025-09-05T23:56:03.779010726Z" level=info msg="StartContainer for \"23b60cb146fbdc21db30e6d25cb4360b02f3a4402ad83e975ea03ba51d0d73d4\"" Sep 5 23:56:03.819034 systemd[1]: Started cri-containerd-23b60cb146fbdc21db30e6d25cb4360b02f3a4402ad83e975ea03ba51d0d73d4.scope - libcontainer container 23b60cb146fbdc21db30e6d25cb4360b02f3a4402ad83e975ea03ba51d0d73d4. 
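[Note] The containerd/systemd entries above trace the standard CRI lifecycle for each static pod: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer launches it. Below is a minimal Go sketch of the same three-call sequence against containerd's CRI socket; the socket path, image tag, and metadata values are illustrative assumptions, not taken from this host.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint (assumed path; adjust if the socket lives elsewhere).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-controller-manager-ci-4081.3.5-n-8e502b48f1",
			Uid:       "57bc0f4980b4dba2bf976b51f5d3d673",
			Namespace: "kube-system",
		},
	}

	// 1. RunPodSandbox returns the sandbox id seen in the "returns sandbox id" entries.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox; the image reference here is a guess.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-controller-manager"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-controller-manager:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer; "returns successfully" in the log corresponds to a nil error here.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}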
Sep 5 23:56:03.874053 containerd[1698]: time="2025-09-05T23:56:03.873794274Z" level=info msg="CreateContainer within sandbox \"1e44cd07384793a50198e5d95068757c6c8b6deb9c0a8894d8e6342c47f77536\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62d8c3fc62fc8c65dc06481795bc2f3e60f75e7931af11f2540fa3850c91227b\"" Sep 5 23:56:03.874777 containerd[1698]: time="2025-09-05T23:56:03.874750155Z" level=info msg="StartContainer for \"62d8c3fc62fc8c65dc06481795bc2f3e60f75e7931af11f2540fa3850c91227b\"" Sep 5 23:56:03.918996 systemd[1]: Started cri-containerd-62d8c3fc62fc8c65dc06481795bc2f3e60f75e7931af11f2540fa3850c91227b.scope - libcontainer container 62d8c3fc62fc8c65dc06481795bc2f3e60f75e7931af11f2540fa3850c91227b. Sep 5 23:56:03.931879 containerd[1698]: time="2025-09-05T23:56:03.930176339Z" level=info msg="StartContainer for \"23b60cb146fbdc21db30e6d25cb4360b02f3a4402ad83e975ea03ba51d0d73d4\" returns successfully" Sep 5 23:56:03.974854 containerd[1698]: time="2025-09-05T23:56:03.974483709Z" level=info msg="StartContainer for \"62d8c3fc62fc8c65dc06481795bc2f3e60f75e7931af11f2540fa3850c91227b\" returns successfully" Sep 5 23:56:04.631652 kubelet[2797]: E0905 23:56:04.631613 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:04.634775 kubelet[2797]: E0905 23:56:04.634278 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:04.634775 kubelet[2797]: E0905 23:56:04.634558 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:04.983664 kubelet[2797]: I0905 23:56:04.983632 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.590456 kubelet[2797]: E0905 23:56:05.590418 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.636212 kubelet[2797]: E0905 23:56:05.635570 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.636212 kubelet[2797]: E0905 23:56:05.635934 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.636212 kubelet[2797]: E0905 23:56:05.636057 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.796248 kubelet[2797]: I0905 23:56:05.795880 2797 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:05.796248 kubelet[2797]: E0905 23:56:05.796012 2797 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-8e502b48f1\": node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:56:05.856085 kubelet[2797]: E0905 23:56:05.855960 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:56:05.956619 kubelet[2797]: E0905 23:56:05.956564 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:56:06.057711 kubelet[2797]: E0905 23:56:06.057640 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:56:06.098282 kubelet[2797]: I0905 23:56:06.098124 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.152801 kubelet[2797]: E0905 23:56:06.152665 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.152801 kubelet[2797]: I0905 23:56:06.152697 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.162208 kubelet[2797]: E0905 23:56:06.161859 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.162208 kubelet[2797]: I0905 23:56:06.161911 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.165349 kubelet[2797]: E0905 23:56:06.165226 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.635120 kubelet[2797]: I0905 23:56:06.635071 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.635638 kubelet[2797]: I0905 23:56:06.635454 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:06.644870 kubelet[2797]: W0905 23:56:06.644242 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:06.644870 kubelet[2797]: W0905 23:56:06.644294 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:07.087997 kubelet[2797]: I0905 23:56:07.087967 2797 apiserver.go:52] "Watching apiserver" Sep 5 23:56:07.103731 kubelet[2797]: I0905 23:56:07.103678 2797 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:56:07.636801 kubelet[2797]: I0905 23:56:07.636770 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:07.637897 kubelet[2797]: I0905 23:56:07.637109 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:07.651696 kubelet[2797]: W0905 23:56:07.651383 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:07.651696 kubelet[2797]: E0905 
23:56:07.651444 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:07.652727 kubelet[2797]: W0905 23:56:07.652420 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:07.652727 kubelet[2797]: E0905 23:56:07.652496 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:08.263768 systemd[1]: Reloading requested from client PID 3070 ('systemctl') (unit session-9.scope)... Sep 5 23:56:08.263787 systemd[1]: Reloading... Sep 5 23:56:08.359876 zram_generator::config[3112]: No configuration found. Sep 5 23:56:08.469032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:56:08.559378 systemd[1]: Reloading finished in 295 ms. Sep 5 23:56:08.593899 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:56:08.606363 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:56:08.606774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:56:08.606926 systemd[1]: kubelet.service: Consumed 1.646s CPU time, 127.4M memory peak, 0B memory swap peak. Sep 5 23:56:08.613488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:56:13.702652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:56:13.715234 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:56:13.767417 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:56:13.767417 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:56:13.767417 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:56:13.767748 kubelet[3174]: I0905 23:56:13.767469 3174 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:56:13.776854 kubelet[3174]: I0905 23:56:13.776801 3174 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 23:56:13.776854 kubelet[3174]: I0905 23:56:13.776841 3174 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:56:13.778285 kubelet[3174]: I0905 23:56:13.778257 3174 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 23:56:13.780815 kubelet[3174]: I0905 23:56:13.780399 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
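[Note] "Client rotation is on" and "Loading cert/key pair" above refer to the rotating kubelet client certificate bundle at /var/lib/kubelet/pki/kubelet-client-current.pem. A short sketch for inspecting when that certificate expires follows; it is plain crypto/x509, nothing kubelet-specific, and only the file path is taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// The symlinked bundle the kubelet logs about; it holds cert and key PEM blocks.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Walk the PEM blocks and report each certificate's subject and expiry.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue // skip the private key block
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s expires %s\n", cert.Subject, cert.NotAfter)
	}
}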
Sep 5 23:56:13.782683 kubelet[3174]: I0905 23:56:13.782657 3174 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:56:13.785890 kubelet[3174]: E0905 23:56:13.785862 3174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:56:13.786030 kubelet[3174]: I0905 23:56:13.786019 3174 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:56:13.789055 kubelet[3174]: I0905 23:56:13.789028 3174 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 23:56:13.789603 kubelet[3174]: I0905 23:56:13.789352 3174 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:56:13.789603 kubelet[3174]: I0905 23:56:13.789383 3174 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-8e502b48f1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:56:13.789746 kubelet[3174]: I0905 23:56:13.789561 3174 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:56:13.789798 kubelet[3174]: I0905 23:56:13.789791 3174 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 23:56:13.789911 kubelet[3174]: I0905 23:56:13.789901 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:56:13.790105 kubelet[3174]: I0905 23:56:13.790096 3174 kubelet.go:446] "Attempting to sync node with API server" Sep 5 23:56:13.792917 kubelet[3174]: I0905 23:56:13.792898 3174 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:56:13.793031 kubelet[3174]: I0905 23:56:13.793020 3174 kubelet.go:352] "Adding apiserver pod source" Sep 5 23:56:13.793142 kubelet[3174]: I0905 23:56:13.793081 3174 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Sep 5 23:56:13.795754 kubelet[3174]: I0905 23:56:13.794036 3174 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:56:13.795754 kubelet[3174]: I0905 23:56:13.794486 3174 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:56:13.796094 kubelet[3174]: I0905 23:56:13.796074 3174 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:56:13.796284 kubelet[3174]: I0905 23:56:13.796275 3174 server.go:1287] "Started kubelet" Sep 5 23:56:13.798114 kubelet[3174]: I0905 23:56:13.797718 3174 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:56:13.799217 kubelet[3174]: I0905 23:56:13.798860 3174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:56:13.799760 kubelet[3174]: I0905 23:56:13.799730 3174 server.go:479] "Adding debug handlers to kubelet server" Sep 5 23:56:13.800247 kubelet[3174]: I0905 23:56:13.800156 3174 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:56:13.803000 kubelet[3174]: I0905 23:56:13.802962 3174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:56:13.811540 kubelet[3174]: I0905 23:56:13.811429 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:56:13.812897 kubelet[3174]: I0905 23:56:13.812868 3174 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:56:13.813184 kubelet[3174]: E0905 23:56:13.813161 3174 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-8e502b48f1\" not found" Sep 5 23:56:13.816298 kubelet[3174]: I0905 23:56:13.816264 3174 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:56:13.816751 kubelet[3174]: I0905 23:56:13.816408 3174 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:56:13.818639 kubelet[3174]: I0905 23:56:13.818582 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:56:13.827014 kubelet[3174]: I0905 23:56:13.826970 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:56:13.827014 kubelet[3174]: I0905 23:56:13.827008 3174 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 23:56:13.827159 kubelet[3174]: I0905 23:56:13.827031 3174 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 23:56:13.827159 kubelet[3174]: I0905 23:56:13.827038 3174 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 23:56:13.827159 kubelet[3174]: E0905 23:56:13.827083 3174 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:56:13.848067 kubelet[3174]: E0905 23:56:13.848038 3174 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:56:13.857991 kubelet[3174]: I0905 23:56:13.857958 3174 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:56:13.857991 kubelet[3174]: I0905 23:56:13.857980 3174 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:56:13.858127 kubelet[3174]: I0905 23:56:13.858061 3174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:56:13.901504 kubelet[3174]: I0905 23:56:13.901477 3174 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:56:13.901892 kubelet[3174]: I0905 23:56:13.901679 3174 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:56:13.901892 kubelet[3174]: I0905 23:56:13.901704 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:56:13.902097 kubelet[3174]: I0905 23:56:13.902080 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:56:13.902166 kubelet[3174]: I0905 23:56:13.902144 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:56:13.902212 kubelet[3174]: I0905 23:56:13.902205 3174 policy_none.go:49] "None policy: Start" Sep 5 23:56:13.902554 kubelet[3174]: I0905 23:56:13.902540 3174 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:56:13.902630 kubelet[3174]: I0905 23:56:13.902621 3174 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:56:13.902903 kubelet[3174]: I0905 23:56:13.902890 3174 state_mem.go:75] "Updated machine memory state" Sep 5 23:56:13.907049 kubelet[3174]: I0905 23:56:13.907021 3174 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:56:13.907209 kubelet[3174]: I0905 23:56:13.907190 3174 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:56:13.907244 kubelet[3174]: I0905 23:56:13.907211 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:56:13.907893 kubelet[3174]: I0905 23:56:13.907872 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:56:13.911215 kubelet[3174]: E0905 23:56:13.911183 3174 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:56:13.928412 kubelet[3174]: I0905 23:56:13.927475 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:13.928412 kubelet[3174]: I0905 23:56:13.927585 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:13.928412 kubelet[3174]: I0905 23:56:13.927787 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.005677 kubelet[3174]: W0905 23:56:14.005393 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:14.011007 kubelet[3174]: W0905 23:56:14.010789 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:14.011007 kubelet[3174]: E0905 23:56:14.010874 3174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.011007 kubelet[3174]: W0905 23:56:14.010940 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:14.011216 kubelet[3174]: E0905 23:56:14.011045 3174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.012364 kubelet[3174]: I0905 23:56:14.012322 3174 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.029474 kubelet[3174]: I0905 23:56:14.028744 3174 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.029474 kubelet[3174]: I0905 23:56:14.028825 3174 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117372 kubelet[3174]: I0905 23:56:14.117335 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c25b9e4a8b031aa5d3944d5c4bcb98b9-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" (UID: \"c25b9e4a8b031aa5d3944d5c4bcb98b9\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117576 kubelet[3174]: I0905 23:56:14.117560 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117682 kubelet[3174]: I0905 23:56:14.117666 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117919 kubelet[3174]: I0905 23:56:14.117745 3174 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117919 kubelet[3174]: I0905 23:56:14.117771 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117919 kubelet[3174]: I0905 23:56:14.117795 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117919 kubelet[3174]: I0905 23:56:14.117812 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c76b9af9895addee84c722d50d322ed1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-8e502b48f1\" (UID: \"c76b9af9895addee84c722d50d322ed1\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.117919 kubelet[3174]: I0905 23:56:14.117857 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.118066 kubelet[3174]: I0905 23:56:14.117878 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57bc0f4980b4dba2bf976b51f5d3d673-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-8e502b48f1\" (UID: \"57bc0f4980b4dba2bf976b51f5d3d673\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.461066 kubelet[3174]: I0905 23:56:14.460910 3174 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:56:14.461967 containerd[1698]: time="2025-09-05T23:56:14.461226175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:56:14.462267 kubelet[3174]: I0905 23:56:14.461471 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:56:14.793919 kubelet[3174]: I0905 23:56:14.793536 3174 apiserver.go:52] "Watching apiserver" Sep 5 23:56:14.816978 kubelet[3174]: I0905 23:56:14.816922 3174 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:56:14.881875 kubelet[3174]: I0905 23:56:14.881312 3174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.895238 kubelet[3174]: W0905 23:56:14.894986 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 23:56:14.895238 kubelet[3174]: E0905 23:56:14.895043 3174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-8e502b48f1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:14.927733 kubelet[3174]: I0905 23:56:14.927654 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-8e502b48f1" podStartSLOduration=8.927634555000001 podStartE2EDuration="8.927634555s" podCreationTimestamp="2025-09-05 23:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:14.913497258 +0000 UTC m=+1.194777584" watchObservedRunningTime="2025-09-05 23:56:14.927634555 +0000 UTC m=+1.208914881" Sep 5 23:56:14.927952 kubelet[3174]: I0905 23:56:14.927792 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-8e502b48f1" podStartSLOduration=1.9277882750000002 podStartE2EDuration="1.927788275s" podCreationTimestamp="2025-09-05 23:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:14.926516354 +0000 UTC m=+1.207796680" watchObservedRunningTime="2025-09-05 23:56:14.927788275 +0000 UTC m=+1.209068601" Sep 5 23:56:14.943228 kubelet[3174]: I0905 23:56:14.942936 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-8e502b48f1" podStartSLOduration=8.942921373 podStartE2EDuration="8.942921373s" podCreationTimestamp="2025-09-05 23:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:14.942606012 +0000 UTC m=+1.223886338" watchObservedRunningTime="2025-09-05 23:56:14.942921373 +0000 UTC m=+1.224201699" Sep 5 23:56:15.079252 systemd[1]: Created slice kubepods-besteffort-poda617c13c_00aa_406b_a3db_4f9098aee614.slice - libcontainer container kubepods-besteffort-poda617c13c_00aa_406b_a3db_4f9098aee614.slice. 
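[Note] The pod_startup_latency_tracker entries above are consistent with the reported SLO duration being the gap between pod creation and the watch-observed running time, with image pull time subtracted (zero here, since firstStartedPulling/lastFinishedPulling are the zero value 0001-01-01). For the scheduler pod, assuming that formula:

\[
t_{\mathrm{SLO}} = t_{\mathrm{watchObservedRunning}} - t_{\mathrm{podCreation}} = 23\mathord{:}56\mathord{:}14.927634555 - 23\mathord{:}56\mathord{:}06 = 8.927634555\ \mathrm{s}
\]

and for the controller-manager pod, 23:56:14.927788275 - 23:56:13 = 1.927788275 s, matching the printed podStartSLOduration values up to float rounding (hence the trailing digits in "1.9277882750000002").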
Sep 5 23:56:15.122482 kubelet[3174]: I0905 23:56:15.122430 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a617c13c-00aa-406b-a3db-4f9098aee614-lib-modules\") pod \"kube-proxy-m4mnp\" (UID: \"a617c13c-00aa-406b-a3db-4f9098aee614\") " pod="kube-system/kube-proxy-m4mnp" Sep 5 23:56:15.122482 kubelet[3174]: I0905 23:56:15.122477 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a617c13c-00aa-406b-a3db-4f9098aee614-xtables-lock\") pod \"kube-proxy-m4mnp\" (UID: \"a617c13c-00aa-406b-a3db-4f9098aee614\") " pod="kube-system/kube-proxy-m4mnp" Sep 5 23:56:15.122668 kubelet[3174]: I0905 23:56:15.122497 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a617c13c-00aa-406b-a3db-4f9098aee614-kube-proxy\") pod \"kube-proxy-m4mnp\" (UID: \"a617c13c-00aa-406b-a3db-4f9098aee614\") " pod="kube-system/kube-proxy-m4mnp" Sep 5 23:56:15.122668 kubelet[3174]: I0905 23:56:15.122514 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52vl\" (UniqueName: \"kubernetes.io/projected/a617c13c-00aa-406b-a3db-4f9098aee614-kube-api-access-b52vl\") pod \"kube-proxy-m4mnp\" (UID: \"a617c13c-00aa-406b-a3db-4f9098aee614\") " pod="kube-system/kube-proxy-m4mnp" Sep 5 23:56:15.387353 containerd[1698]: time="2025-09-05T23:56:15.387066927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4mnp,Uid:a617c13c-00aa-406b-a3db-4f9098aee614,Namespace:kube-system,Attempt:0,}" Sep 5 23:56:15.447502 containerd[1698]: time="2025-09-05T23:56:15.447398597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:15.448121 containerd[1698]: time="2025-09-05T23:56:15.447816197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:15.448121 containerd[1698]: time="2025-09-05T23:56:15.447884957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:15.448121 containerd[1698]: time="2025-09-05T23:56:15.447980357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:15.462930 systemd[1]: run-containerd-runc-k8s.io-35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3-runc.M0rBvf.mount: Deactivated successfully. Sep 5 23:56:15.471011 systemd[1]: Started cri-containerd-35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3.scope - libcontainer container 35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3. 
Sep 5 23:56:15.503043 containerd[1698]: time="2025-09-05T23:56:15.502660301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4mnp,Uid:a617c13c-00aa-406b-a3db-4f9098aee614,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3\"" Sep 5 23:56:15.509737 containerd[1698]: time="2025-09-05T23:56:15.509486989Z" level=info msg="CreateContainer within sandbox \"35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:56:15.570524 containerd[1698]: time="2025-09-05T23:56:15.570472219Z" level=info msg="CreateContainer within sandbox \"35a75dcc4e15d28a7e78d3fc9cf92d795e36a24dd22f3cd5131d6e5edf03efa3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7b222acd2ad0f1dfaafbcf426064552bd9bfb763532d34c66ceb1429b630df3\"" Sep 5 23:56:15.571842 containerd[1698]: time="2025-09-05T23:56:15.571796021Z" level=info msg="StartContainer for \"e7b222acd2ad0f1dfaafbcf426064552bd9bfb763532d34c66ceb1429b630df3\"" Sep 5 23:56:15.576713 systemd[1]: Created slice kubepods-besteffort-pod2efd9174_d8d5_4924_a663_0015dbea4b33.slice - libcontainer container kubepods-besteffort-pod2efd9174_d8d5_4924_a663_0015dbea4b33.slice. Sep 5 23:56:15.605192 systemd[1]: Started cri-containerd-e7b222acd2ad0f1dfaafbcf426064552bd9bfb763532d34c66ceb1429b630df3.scope - libcontainer container e7b222acd2ad0f1dfaafbcf426064552bd9bfb763532d34c66ceb1429b630df3. Sep 5 23:56:15.626384 kubelet[3174]: I0905 23:56:15.626332 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2efd9174-d8d5-4924-a663-0015dbea4b33-var-lib-calico\") pod \"tigera-operator-755d956888-7kk8p\" (UID: \"2efd9174-d8d5-4924-a663-0015dbea4b33\") " pod="tigera-operator/tigera-operator-755d956888-7kk8p" Sep 5 23:56:15.626384 kubelet[3174]: I0905 23:56:15.626380 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ftdj\" (UniqueName: \"kubernetes.io/projected/2efd9174-d8d5-4924-a663-0015dbea4b33-kube-api-access-5ftdj\") pod \"tigera-operator-755d956888-7kk8p\" (UID: \"2efd9174-d8d5-4924-a663-0015dbea4b33\") " pod="tigera-operator/tigera-operator-755d956888-7kk8p" Sep 5 23:56:15.638536 containerd[1698]: time="2025-09-05T23:56:15.637852417Z" level=info msg="StartContainer for \"e7b222acd2ad0f1dfaafbcf426064552bd9bfb763532d34c66ceb1429b630df3\" returns successfully" Sep 5 23:56:15.881850 containerd[1698]: time="2025-09-05T23:56:15.881803540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7kk8p,Uid:2efd9174-d8d5-4924-a663-0015dbea4b33,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:56:15.944260 containerd[1698]: time="2025-09-05T23:56:15.943270051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:15.944260 containerd[1698]: time="2025-09-05T23:56:15.943707451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:15.944260 containerd[1698]: time="2025-09-05T23:56:15.943982132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:15.945015 containerd[1698]: time="2025-09-05T23:56:15.944879613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:15.966671 systemd[1]: Started cri-containerd-b855ffe01e9e9be8c52e3719f194af38499b810a405445d71ffe078aee9f3c15.scope - libcontainer container b855ffe01e9e9be8c52e3719f194af38499b810a405445d71ffe078aee9f3c15. Sep 5 23:56:15.994589 containerd[1698]: time="2025-09-05T23:56:15.994472030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7kk8p,Uid:2efd9174-d8d5-4924-a663-0015dbea4b33,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b855ffe01e9e9be8c52e3719f194af38499b810a405445d71ffe078aee9f3c15\"" Sep 5 23:56:15.997320 containerd[1698]: time="2025-09-05T23:56:15.996638713Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:56:17.511134 kubelet[3174]: I0905 23:56:17.511029 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4mnp" podStartSLOduration=2.511010312 podStartE2EDuration="2.511010312s" podCreationTimestamp="2025-09-05 23:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:56:15.89942504 +0000 UTC m=+2.180705366" watchObservedRunningTime="2025-09-05 23:56:17.511010312 +0000 UTC m=+3.792290598" Sep 5 23:56:17.665492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230760246.mount: Deactivated successfully. Sep 5 23:56:18.386095 containerd[1698]: time="2025-09-05T23:56:18.386044306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:18.389863 containerd[1698]: time="2025-09-05T23:56:18.389811430Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:56:18.396818 containerd[1698]: time="2025-09-05T23:56:18.396755119Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:18.405182 containerd[1698]: time="2025-09-05T23:56:18.405129689Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:18.406036 containerd[1698]: time="2025-09-05T23:56:18.405913649Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.409238856s" Sep 5 23:56:18.406036 containerd[1698]: time="2025-09-05T23:56:18.405947450Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:56:18.409777 containerd[1698]: time="2025-09-05T23:56:18.409624734Z" level=info msg="CreateContainer within sandbox \"b855ffe01e9e9be8c52e3719f194af38499b810a405445d71ffe078aee9f3c15\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:56:18.444524 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount966141711.mount: Deactivated successfully. Sep 5 23:56:18.455750 containerd[1698]: time="2025-09-05T23:56:18.455703708Z" level=info msg="CreateContainer within sandbox \"b855ffe01e9e9be8c52e3719f194af38499b810a405445d71ffe078aee9f3c15\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a2bc455b8e9f3f42534d2f8921610e1c321ce34fab4e91eac9e628df29de4a5\"" Sep 5 23:56:18.456448 containerd[1698]: time="2025-09-05T23:56:18.456408789Z" level=info msg="StartContainer for \"7a2bc455b8e9f3f42534d2f8921610e1c321ce34fab4e91eac9e628df29de4a5\"" Sep 5 23:56:18.482999 systemd[1]: Started cri-containerd-7a2bc455b8e9f3f42534d2f8921610e1c321ce34fab4e91eac9e628df29de4a5.scope - libcontainer container 7a2bc455b8e9f3f42534d2f8921610e1c321ce34fab4e91eac9e628df29de4a5. Sep 5 23:56:18.515415 containerd[1698]: time="2025-09-05T23:56:18.515121859Z" level=info msg="StartContainer for \"7a2bc455b8e9f3f42534d2f8921610e1c321ce34fab4e91eac9e628df29de4a5\" returns successfully" Sep 5 23:56:19.754270 kubelet[3174]: I0905 23:56:19.753871 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-7kk8p" podStartSLOduration=2.3431904230000002 podStartE2EDuration="4.753852922s" podCreationTimestamp="2025-09-05 23:56:15 +0000 UTC" firstStartedPulling="2025-09-05 23:56:15.996149832 +0000 UTC m=+2.277430158" lastFinishedPulling="2025-09-05 23:56:18.406812371 +0000 UTC m=+4.688092657" observedRunningTime="2025-09-05 23:56:18.907615002 +0000 UTC m=+5.188895568" watchObservedRunningTime="2025-09-05 23:56:19.753852922 +0000 UTC m=+6.035133328" Sep 5 23:56:24.514494 sudo[2205]: pam_unix(sudo:session): session closed for user root Sep 5 23:56:24.600571 sshd[2202]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:24.606390 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:40772.service: Deactivated successfully. Sep 5 23:56:24.613478 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:56:24.615312 systemd[1]: session-9.scope: Consumed 6.483s CPU time, 148.7M memory peak, 0B memory swap peak. Sep 5 23:56:24.617328 systemd-logind[1662]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:56:24.618585 systemd-logind[1662]: Removed session 9. Sep 5 23:56:31.646699 systemd[1]: Created slice kubepods-besteffort-pod43016769_b1ed_46b4_bffa_f8644905be0c.slice - libcontainer container kubepods-besteffort-pod43016769_b1ed_46b4_bffa_f8644905be0c.slice. 
Sep 5 23:56:31.728898 kubelet[3174]: I0905 23:56:31.728840 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43016769-b1ed-46b4-bffa-f8644905be0c-tigera-ca-bundle\") pod \"calico-typha-8556fcfcb6-92ltq\" (UID: \"43016769-b1ed-46b4-bffa-f8644905be0c\") " pod="calico-system/calico-typha-8556fcfcb6-92ltq" Sep 5 23:56:31.728898 kubelet[3174]: I0905 23:56:31.728891 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/43016769-b1ed-46b4-bffa-f8644905be0c-typha-certs\") pod \"calico-typha-8556fcfcb6-92ltq\" (UID: \"43016769-b1ed-46b4-bffa-f8644905be0c\") " pod="calico-system/calico-typha-8556fcfcb6-92ltq" Sep 5 23:56:31.728898 kubelet[3174]: I0905 23:56:31.728912 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76b4\" (UniqueName: \"kubernetes.io/projected/43016769-b1ed-46b4-bffa-f8644905be0c-kube-api-access-c76b4\") pod \"calico-typha-8556fcfcb6-92ltq\" (UID: \"43016769-b1ed-46b4-bffa-f8644905be0c\") " pod="calico-system/calico-typha-8556fcfcb6-92ltq" Sep 5 23:56:31.785097 systemd[1]: Created slice kubepods-besteffort-poda6b4c5f0_4675_4f39_a55d_8a40ae9b55a7.slice - libcontainer container kubepods-besteffort-poda6b4c5f0_4675_4f39_a55d_8a40ae9b55a7.slice. Sep 5 23:56:31.829095 kubelet[3174]: I0905 23:56:31.829044 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76tmg\" (UniqueName: \"kubernetes.io/projected/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-kube-api-access-76tmg\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829361 kubelet[3174]: I0905 23:56:31.829140 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-cni-log-dir\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829361 kubelet[3174]: I0905 23:56:31.829161 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-policysync\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829361 kubelet[3174]: I0905 23:56:31.829177 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-xtables-lock\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829361 kubelet[3174]: I0905 23:56:31.829218 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-flexvol-driver-host\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829361 kubelet[3174]: I0905 23:56:31.829233 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-node-certs\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829488 kubelet[3174]: I0905 23:56:31.829247 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-tigera-ca-bundle\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829488 kubelet[3174]: I0905 23:56:31.829261 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-var-lib-calico\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829488 kubelet[3174]: I0905 23:56:31.829276 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-cni-bin-dir\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829488 kubelet[3174]: I0905 23:56:31.829290 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-lib-modules\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829488 kubelet[3174]: I0905 23:56:31.829310 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-var-run-calico\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.829593 kubelet[3174]: I0905 23:56:31.829336 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7-cni-net-dir\") pod \"calico-node-zxcfs\" (UID: \"a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7\") " pod="calico-system/calico-node-zxcfs" Sep 5 23:56:31.916451 kubelet[3174]: E0905 23:56:31.915866 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:31.930303 kubelet[3174]: I0905 23:56:31.929839 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b765c2c2-113c-4a43-beb8-f69a462337be-socket-dir\") pod \"csi-node-driver-8qjr6\" (UID: \"b765c2c2-113c-4a43-beb8-f69a462337be\") " pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:31.930303 kubelet[3174]: I0905 23:56:31.929957 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b765c2c2-113c-4a43-beb8-f69a462337be-varrun\") pod 
\"csi-node-driver-8qjr6\" (UID: \"b765c2c2-113c-4a43-beb8-f69a462337be\") " pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:31.930303 kubelet[3174]: I0905 23:56:31.929974 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjl4\" (UniqueName: \"kubernetes.io/projected/b765c2c2-113c-4a43-beb8-f69a462337be-kube-api-access-9fjl4\") pod \"csi-node-driver-8qjr6\" (UID: \"b765c2c2-113c-4a43-beb8-f69a462337be\") " pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:31.930303 kubelet[3174]: I0905 23:56:31.930018 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b765c2c2-113c-4a43-beb8-f69a462337be-kubelet-dir\") pod \"csi-node-driver-8qjr6\" (UID: \"b765c2c2-113c-4a43-beb8-f69a462337be\") " pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:31.930303 kubelet[3174]: I0905 23:56:31.930035 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b765c2c2-113c-4a43-beb8-f69a462337be-registration-dir\") pod \"csi-node-driver-8qjr6\" (UID: \"b765c2c2-113c-4a43-beb8-f69a462337be\") " pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:31.933197 kubelet[3174]: E0905 23:56:31.933117 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.933574 kubelet[3174]: W0905 23:56:31.933448 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.933574 kubelet[3174]: E0905 23:56:31.933477 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:31.936258 kubelet[3174]: E0905 23:56:31.936136 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.936258 kubelet[3174]: W0905 23:56:31.936155 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.936258 kubelet[3174]: E0905 23:56:31.936195 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:31.938657 kubelet[3174]: E0905 23:56:31.936898 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.938657 kubelet[3174]: W0905 23:56:31.936913 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.938657 kubelet[3174]: E0905 23:56:31.936943 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:31.939088 kubelet[3174]: E0905 23:56:31.938902 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.939088 kubelet[3174]: W0905 23:56:31.938922 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.939088 kubelet[3174]: E0905 23:56:31.938954 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:31.941858 kubelet[3174]: E0905 23:56:31.939511 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.941858 kubelet[3174]: W0905 23:56:31.939525 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.941858 kubelet[3174]: E0905 23:56:31.939537 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:31.944233 kubelet[3174]: E0905 23:56:31.944210 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.944343 kubelet[3174]: W0905 23:56:31.944330 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.944405 kubelet[3174]: E0905 23:56:31.944394 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:31.953825 containerd[1698]: time="2025-09-05T23:56:31.953739949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8556fcfcb6-92ltq,Uid:43016769-b1ed-46b4-bffa-f8644905be0c,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:31.961067 kubelet[3174]: E0905 23:56:31.960530 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:31.961067 kubelet[3174]: W0905 23:56:31.960557 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:31.961067 kubelet[3174]: E0905 23:56:31.960576 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031445 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.032007 kubelet[3174]: W0905 23:56:32.031479 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031501 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031741 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.032007 kubelet[3174]: W0905 23:56:32.031750 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031774 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031961 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.032007 kubelet[3174]: W0905 23:56:32.031969 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.032007 kubelet[3174]: E0905 23:56:32.031985 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.032323 kubelet[3174]: E0905 23:56:32.032208 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.032323 kubelet[3174]: W0905 23:56:32.032220 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.032323 kubelet[3174]: E0905 23:56:32.032233 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.033242 kubelet[3174]: E0905 23:56:32.032939 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.033242 kubelet[3174]: W0905 23:56:32.032960 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.033242 kubelet[3174]: E0905 23:56:32.032986 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.033631 kubelet[3174]: E0905 23:56:32.033579 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.033631 kubelet[3174]: W0905 23:56:32.033597 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.034340 kubelet[3174]: E0905 23:56:32.033877 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.034340 kubelet[3174]: W0905 23:56:32.033894 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.034340 kubelet[3174]: E0905 23:56:32.034040 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.034340 kubelet[3174]: E0905 23:56:32.033908 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.036331 kubelet[3174]: E0905 23:56:32.036058 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.036407 kubelet[3174]: W0905 23:56:32.036081 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.036523 kubelet[3174]: E0905 23:56:32.036486 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.037305 kubelet[3174]: E0905 23:56:32.037262 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.037305 kubelet[3174]: W0905 23:56:32.037278 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.037305 kubelet[3174]: E0905 23:56:32.037299 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.037786 kubelet[3174]: E0905 23:56:32.037766 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.037786 kubelet[3174]: W0905 23:56:32.037783 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.037786 kubelet[3174]: E0905 23:56:32.037825 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.038710 containerd[1698]: time="2025-09-05T23:56:32.037465080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:32.038710 containerd[1698]: time="2025-09-05T23:56:32.037528440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:32.038710 containerd[1698]: time="2025-09-05T23:56:32.037539160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:32.038710 containerd[1698]: time="2025-09-05T23:56:32.038654321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:32.039032 kubelet[3174]: E0905 23:56:32.038777 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.039032 kubelet[3174]: W0905 23:56:32.038792 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.039032 kubelet[3174]: E0905 23:56:32.038827 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.039032 kubelet[3174]: E0905 23:56:32.038971 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.039032 kubelet[3174]: W0905 23:56:32.038980 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.039483 kubelet[3174]: E0905 23:56:32.039167 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.039483 kubelet[3174]: E0905 23:56:32.039283 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.039483 kubelet[3174]: W0905 23:56:32.039294 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.039483 kubelet[3174]: E0905 23:56:32.039422 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.040639 kubelet[3174]: E0905 23:56:32.039757 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.040639 kubelet[3174]: W0905 23:56:32.039776 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.040639 kubelet[3174]: E0905 23:56:32.040604 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.042329 kubelet[3174]: E0905 23:56:32.041940 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.042329 kubelet[3174]: W0905 23:56:32.041957 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.042329 kubelet[3174]: E0905 23:56:32.042191 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.042788 kubelet[3174]: E0905 23:56:32.042531 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.042788 kubelet[3174]: W0905 23:56:32.042545 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.042788 kubelet[3174]: E0905 23:56:32.042591 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.045026 kubelet[3174]: E0905 23:56:32.044897 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.045026 kubelet[3174]: W0905 23:56:32.044919 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.045165 kubelet[3174]: E0905 23:56:32.045138 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.046055 kubelet[3174]: E0905 23:56:32.045933 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.046055 kubelet[3174]: W0905 23:56:32.045956 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.046201 kubelet[3174]: E0905 23:56:32.046187 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.047010 kubelet[3174]: E0905 23:56:32.046918 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.047010 kubelet[3174]: W0905 23:56:32.046932 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.048934 kubelet[3174]: E0905 23:56:32.048898 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.049232 kubelet[3174]: E0905 23:56:32.049217 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.049865 kubelet[3174]: W0905 23:56:32.049338 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.049865 kubelet[3174]: E0905 23:56:32.049559 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.052036 kubelet[3174]: E0905 23:56:32.052016 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.052123 kubelet[3174]: W0905 23:56:32.052111 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.052310 kubelet[3174]: E0905 23:56:32.052232 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.052549 kubelet[3174]: E0905 23:56:32.052424 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.052549 kubelet[3174]: W0905 23:56:32.052444 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.053695 kubelet[3174]: E0905 23:56:32.052672 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.054035 kubelet[3174]: E0905 23:56:32.053907 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.054035 kubelet[3174]: W0905 23:56:32.053931 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.055361 kubelet[3174]: E0905 23:56:32.055295 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.055361 kubelet[3174]: E0905 23:56:32.055333 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.055361 kubelet[3174]: W0905 23:56:32.055345 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.055687 kubelet[3174]: E0905 23:56:32.055578 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:32.056717 kubelet[3174]: E0905 23:56:32.056688 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.056900 kubelet[3174]: W0905 23:56:32.056791 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.056900 kubelet[3174]: E0905 23:56:32.056811 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.076532 systemd[1]: Started cri-containerd-4d49bf554d83e12044b88ef58c483edeb9b8a9b2e05745fb48277f375bcf5ffe.scope - libcontainer container 4d49bf554d83e12044b88ef58c483edeb9b8a9b2e05745fb48277f375bcf5ffe. Sep 5 23:56:32.091679 containerd[1698]: time="2025-09-05T23:56:32.091635219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxcfs,Uid:a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:32.114411 containerd[1698]: time="2025-09-05T23:56:32.114368964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8556fcfcb6-92ltq,Uid:43016769-b1ed-46b4-bffa-f8644905be0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d49bf554d83e12044b88ef58c483edeb9b8a9b2e05745fb48277f375bcf5ffe\"" Sep 5 23:56:32.116337 containerd[1698]: time="2025-09-05T23:56:32.116268446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:56:32.133444 kubelet[3174]: E0905 23:56:32.133336 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:32.133444 kubelet[3174]: W0905 23:56:32.133380 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:32.133444 kubelet[3174]: E0905 23:56:32.133401 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:32.178900 containerd[1698]: time="2025-09-05T23:56:32.176709952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:32.178900 containerd[1698]: time="2025-09-05T23:56:32.177237912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:32.178900 containerd[1698]: time="2025-09-05T23:56:32.177276792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:32.178900 containerd[1698]: time="2025-09-05T23:56:32.177367192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:32.197035 systemd[1]: Started cri-containerd-56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011.scope - libcontainer container 56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011. 
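The storm of driver-call.go and plugins.go messages above is kubelet's FlexVolume prober at work: on each probe it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, executes every vendor~driver binary with the single argument init, and parses the stdout as JSON. At this point the nodeagent~uds/uds binary has not been installed yet (Calico's flexvol-driver init container, started further down in this log, is what ships it), so the exec fails with "executable file not found in $PATH", stdout stays empty, and the JSON unmarshal reports "unexpected end of JSON input". The warnings are noisy but harmless and should stop once the binary lands. A minimal Go sketch of the call contract, assuming the published FlexVolume driver protocol (the program itself is hypothetical, not the real uds driver):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object kubelet expects on a driver's stdout.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// kubelet invokes `<driver> init` once at plugin-probe time; empty
	// stdout is exactly what yields "unexpected end of JSON input" above.
	status := DriverStatus{Status: "Not supported"}
	if len(os.Args) > 1 && os.Args[1] == "init" {
		status = DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		}
	}
	out, err := json.Marshal(status)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(out))
}
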
Sep 5 23:56:32.238911 containerd[1698]: time="2025-09-05T23:56:32.238675699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxcfs,Uid:a6b4c5f0-4675-4f39-a55d-8a40ae9b55a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\"" Sep 5 23:56:33.411050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197701573.mount: Deactivated successfully. Sep 5 23:56:33.829458 kubelet[3174]: E0905 23:56:33.829407 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:34.379935 containerd[1698]: time="2025-09-05T23:56:34.379889005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:34.385970 containerd[1698]: time="2025-09-05T23:56:34.385773530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 23:56:34.392462 containerd[1698]: time="2025-09-05T23:56:34.392410737Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:34.400332 containerd[1698]: time="2025-09-05T23:56:34.400281744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:34.400925 containerd[1698]: time="2025-09-05T23:56:34.400785905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.284462819s" Sep 5 23:56:34.400925 containerd[1698]: time="2025-09-05T23:56:34.400819945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:56:34.405550 containerd[1698]: time="2025-09-05T23:56:34.405503909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:56:34.419052 containerd[1698]: time="2025-09-05T23:56:34.419006882Z" level=info msg="CreateContainer within sandbox \"4d49bf554d83e12044b88ef58c483edeb9b8a9b2e05745fb48277f375bcf5ffe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:56:34.494852 containerd[1698]: time="2025-09-05T23:56:34.492244992Z" level=info msg="CreateContainer within sandbox \"4d49bf554d83e12044b88ef58c483edeb9b8a9b2e05745fb48277f375bcf5ffe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ecccd5881ba7ddabdde441c22bdd41a807770d6722cb54383d7490f8e7db1825\"" Sep 5 23:56:34.495792 containerd[1698]: time="2025-09-05T23:56:34.495156675Z" level=info msg="StartContainer for \"ecccd5881ba7ddabdde441c22bdd41a807770d6722cb54383d7490f8e7db1825\"" Sep 5 23:56:34.530079 systemd[1]: Started cri-containerd-ecccd5881ba7ddabdde441c22bdd41a807770d6722cb54383d7490f8e7db1825.scope - libcontainer container 
ecccd5881ba7ddabdde441c22bdd41a807770d6722cb54383d7490f8e7db1825. Sep 5 23:56:34.572537 containerd[1698]: time="2025-09-05T23:56:34.572473589Z" level=info msg="StartContainer for \"ecccd5881ba7ddabdde441c22bdd41a807770d6722cb54383d7490f8e7db1825\" returns successfully" Sep 5 23:56:34.938207 kubelet[3174]: E0905 23:56:34.938085 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.938207 kubelet[3174]: W0905 23:56:34.938109 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.938207 kubelet[3174]: E0905 23:56:34.938129 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.938847 kubelet[3174]: E0905 23:56:34.938677 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.938847 kubelet[3174]: W0905 23:56:34.938693 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.938847 kubelet[3174]: E0905 23:56:34.938735 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.939139 kubelet[3174]: E0905 23:56:34.938927 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.939139 kubelet[3174]: W0905 23:56:34.938937 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.939139 kubelet[3174]: E0905 23:56:34.938947 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.939321 kubelet[3174]: E0905 23:56:34.939255 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.939321 kubelet[3174]: W0905 23:56:34.939266 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.939321 kubelet[3174]: E0905 23:56:34.939276 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.939637 kubelet[3174]: E0905 23:56:34.939543 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.939637 kubelet[3174]: W0905 23:56:34.939554 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.939637 kubelet[3174]: E0905 23:56:34.939564 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.939860 kubelet[3174]: E0905 23:56:34.939787 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.939860 kubelet[3174]: W0905 23:56:34.939798 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.939860 kubelet[3174]: E0905 23:56:34.939807 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.940186 kubelet[3174]: E0905 23:56:34.940087 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.940186 kubelet[3174]: W0905 23:56:34.940099 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.940186 kubelet[3174]: E0905 23:56:34.940109 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.940414 kubelet[3174]: E0905 23:56:34.940346 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.940414 kubelet[3174]: W0905 23:56:34.940357 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.940414 kubelet[3174]: E0905 23:56:34.940366 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.940737 kubelet[3174]: E0905 23:56:34.940681 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.940737 kubelet[3174]: W0905 23:56:34.940691 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.940737 kubelet[3174]: E0905 23:56:34.940701 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.941111 kubelet[3174]: E0905 23:56:34.941016 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.941111 kubelet[3174]: W0905 23:56:34.941027 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.941111 kubelet[3174]: E0905 23:56:34.941037 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.941332 kubelet[3174]: E0905 23:56:34.941271 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.941332 kubelet[3174]: W0905 23:56:34.941281 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.941332 kubelet[3174]: E0905 23:56:34.941290 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.941894 kubelet[3174]: E0905 23:56:34.941616 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.942056 kubelet[3174]: W0905 23:56:34.941978 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.942056 kubelet[3174]: E0905 23:56:34.941998 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.942405 kubelet[3174]: E0905 23:56:34.942325 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.942405 kubelet[3174]: W0905 23:56:34.942337 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.942405 kubelet[3174]: E0905 23:56:34.942348 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.942814 kubelet[3174]: E0905 23:56:34.942718 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.942814 kubelet[3174]: W0905 23:56:34.942732 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.942814 kubelet[3174]: E0905 23:56:34.942742 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.943121 kubelet[3174]: E0905 23:56:34.943110 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.943364 kubelet[3174]: W0905 23:56:34.943172 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.943364 kubelet[3174]: E0905 23:56:34.943282 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.958759 kubelet[3174]: E0905 23:56:34.958635 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.958759 kubelet[3174]: W0905 23:56:34.958671 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.958759 kubelet[3174]: E0905 23:56:34.958689 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.959452 kubelet[3174]: E0905 23:56:34.959345 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.959603 kubelet[3174]: W0905 23:56:34.959521 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.959603 kubelet[3174]: E0905 23:56:34.959549 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.960066 kubelet[3174]: E0905 23:56:34.959918 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.960066 kubelet[3174]: W0905 23:56:34.959931 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.960066 kubelet[3174]: E0905 23:56:34.959949 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.960280 kubelet[3174]: E0905 23:56:34.960251 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.960280 kubelet[3174]: W0905 23:56:34.960262 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.960460 kubelet[3174]: E0905 23:56:34.960386 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.960579 kubelet[3174]: E0905 23:56:34.960547 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.960579 kubelet[3174]: W0905 23:56:34.960557 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.960777 kubelet[3174]: E0905 23:56:34.960725 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.960902 kubelet[3174]: E0905 23:56:34.960892 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.961715 kubelet[3174]: W0905 23:56:34.961562 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.961715 kubelet[3174]: E0905 23:56:34.961614 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.962069 kubelet[3174]: E0905 23:56:34.961975 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.962069 kubelet[3174]: W0905 23:56:34.961999 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.962069 kubelet[3174]: E0905 23:56:34.962064 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.963657 kubelet[3174]: I0905 23:56:34.962782 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8556fcfcb6-92ltq" podStartSLOduration=1.674795219 podStartE2EDuration="3.962767401s" podCreationTimestamp="2025-09-05 23:56:31 +0000 UTC" firstStartedPulling="2025-09-05 23:56:32.116007486 +0000 UTC m=+18.397287812" lastFinishedPulling="2025-09-05 23:56:34.403979668 +0000 UTC m=+20.685259994" observedRunningTime="2025-09-05 23:56:34.948941428 +0000 UTC m=+21.230221754" watchObservedRunningTime="2025-09-05 23:56:34.962767401 +0000 UTC m=+21.244047727" Sep 5 23:56:34.965025 kubelet[3174]: E0905 23:56:34.965009 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.965216 kubelet[3174]: W0905 23:56:34.965111 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.965693 kubelet[3174]: E0905 23:56:34.965363 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.965693 kubelet[3174]: E0905 23:56:34.965413 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.965693 kubelet[3174]: W0905 23:56:34.965621 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.965693 kubelet[3174]: E0905 23:56:34.965668 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.967300 kubelet[3174]: E0905 23:56:34.966950 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.967300 kubelet[3174]: W0905 23:56:34.966967 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.967594 kubelet[3174]: E0905 23:56:34.967400 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.968767 kubelet[3174]: E0905 23:56:34.968427 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.968767 kubelet[3174]: W0905 23:56:34.968442 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.970366 kubelet[3174]: E0905 23:56:34.970145 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.970366 kubelet[3174]: W0905 23:56:34.970159 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.970594 kubelet[3174]: E0905 23:56:34.970236 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.970594 kubelet[3174]: E0905 23:56:34.970496 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.971112 kubelet[3174]: E0905 23:56:34.970929 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.971112 kubelet[3174]: W0905 23:56:34.970958 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.971112 kubelet[3174]: E0905 23:56:34.970981 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:34.971630 kubelet[3174]: E0905 23:56:34.971491 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.971630 kubelet[3174]: W0905 23:56:34.971506 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.971630 kubelet[3174]: E0905 23:56:34.971519 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.971994 kubelet[3174]: E0905 23:56:34.971794 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.971994 kubelet[3174]: W0905 23:56:34.971814 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.971994 kubelet[3174]: E0905 23:56:34.971828 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.973035 kubelet[3174]: E0905 23:56:34.972994 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.973035 kubelet[3174]: W0905 23:56:34.973011 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.973035 kubelet[3174]: E0905 23:56:34.973033 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.973552 kubelet[3174]: E0905 23:56:34.973522 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.973552 kubelet[3174]: W0905 23:56:34.973542 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.973552 kubelet[3174]: E0905 23:56:34.973570 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:56:34.973782 kubelet[3174]: E0905 23:56:34.973765 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:56:34.973782 kubelet[3174]: W0905 23:56:34.973778 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:56:34.974001 kubelet[3174]: E0905 23:56:34.973789 3174 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:56:35.750238 containerd[1698]: time="2025-09-05T23:56:35.750181273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:35.754074 containerd[1698]: time="2025-09-05T23:56:35.753926757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 23:56:35.762394 containerd[1698]: time="2025-09-05T23:56:35.762351645Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:35.768398 containerd[1698]: time="2025-09-05T23:56:35.768123290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:35.768892 containerd[1698]: time="2025-09-05T23:56:35.768859611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.363313182s" Sep 5 23:56:35.768975 containerd[1698]: time="2025-09-05T23:56:35.768894051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:56:35.771740 containerd[1698]: time="2025-09-05T23:56:35.771704294Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:56:35.829086 kubelet[3174]: E0905 23:56:35.827868 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:35.848912 containerd[1698]: time="2025-09-05T23:56:35.848869448Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336\"" Sep 5 23:56:35.851126 containerd[1698]: time="2025-09-05T23:56:35.849939689Z" level=info msg="StartContainer for \"14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336\"" Sep 5 23:56:35.878988 systemd[1]: Started cri-containerd-14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336.scope - libcontainer container 14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336. Sep 5 23:56:35.918401 containerd[1698]: time="2025-09-05T23:56:35.918358914Z" level=info msg="StartContainer for \"14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336\" returns successfully" Sep 5 23:56:35.931246 systemd[1]: cri-containerd-14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336.scope: Deactivated successfully. 
Sep 5 23:56:35.966484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336-rootfs.mount: Deactivated successfully. Sep 5 23:56:36.942276 containerd[1698]: time="2025-09-05T23:56:36.942169612Z" level=info msg="shim disconnected" id=14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336 namespace=k8s.io Sep 5 23:56:36.942276 containerd[1698]: time="2025-09-05T23:56:36.942250932Z" level=warning msg="cleaning up after shim disconnected" id=14be9c7c758e47e5d6255f1fc2c41c91921500fb9df4b029f59c76ed532e0336 namespace=k8s.io Sep 5 23:56:36.942905 containerd[1698]: time="2025-09-05T23:56:36.942259412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:37.828065 kubelet[3174]: E0905 23:56:37.828024 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:37.942786 containerd[1698]: time="2025-09-05T23:56:37.942211287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:56:39.828981 kubelet[3174]: E0905 23:56:39.828933 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:40.493670 containerd[1698]: time="2025-09-05T23:56:40.493621124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:40.496575 containerd[1698]: time="2025-09-05T23:56:40.496543446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:56:40.502937 containerd[1698]: time="2025-09-05T23:56:40.502088412Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:40.511657 containerd[1698]: time="2025-09-05T23:56:40.511589181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:40.512398 containerd[1698]: time="2025-09-05T23:56:40.512239941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.569987214s" Sep 5 23:56:40.512398 containerd[1698]: time="2025-09-05T23:56:40.512276342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:56:40.515154 containerd[1698]: time="2025-09-05T23:56:40.514979784Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:56:40.571853 
containerd[1698]: time="2025-09-05T23:56:40.571787484Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c\"" Sep 5 23:56:40.573292 containerd[1698]: time="2025-09-05T23:56:40.573257485Z" level=info msg="StartContainer for \"7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c\"" Sep 5 23:56:40.610035 systemd[1]: Started cri-containerd-7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c.scope - libcontainer container 7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c. Sep 5 23:56:40.643739 containerd[1698]: time="2025-09-05T23:56:40.643688925Z" level=info msg="StartContainer for \"7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c\" returns successfully" Sep 5 23:56:41.828218 kubelet[3174]: E0905 23:56:41.828078 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:41.987242 containerd[1698]: time="2025-09-05T23:56:41.987142524Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:56:41.989272 systemd[1]: cri-containerd-7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c.scope: Deactivated successfully. Sep 5 23:56:42.010058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c-rootfs.mount: Deactivated successfully. 
Sep 5 23:56:42.074555 kubelet[3174]: I0905 23:56:42.074531 3174 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:56:42.330107 kubelet[3174]: W0905 23:56:42.136239 3174 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.330107 kubelet[3174]: E0905 23:56:42.136281 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330107 kubelet[3174]: W0905 23:56:42.137222 3174 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.330107 kubelet[3174]: E0905 23:56:42.137251 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.117506 systemd[1]: Created slice kubepods-burstable-podc33313ed_3911_49ca_80e5_a8358a33d55f.slice - libcontainer container kubepods-burstable-podc33313ed_3911_49ca_80e5_a8358a33d55f.slice. 
Sep 5 23:56:42.330439 kubelet[3174]: W0905 23:56:42.137699 3174 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.330439 kubelet[3174]: E0905 23:56:42.137721 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330439 kubelet[3174]: W0905 23:56:42.137795 3174 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.330439 kubelet[3174]: E0905 23:56:42.137805 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330439 kubelet[3174]: W0905 23:56:42.137850 3174 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.131041 systemd[1]: Created slice kubepods-burstable-pod931034f2_80f5_4484_bfdc_20cfabfeeec2.slice - libcontainer container kubepods-burstable-pod931034f2_80f5_4484_bfdc_20cfabfeeec2.slice. 
Sep 5 23:56:42.330716 kubelet[3174]: E0905 23:56:42.137861 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330716 kubelet[3174]: W0905 23:56:42.137893 3174 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.330716 kubelet[3174]: E0905 23:56:42.137903 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330716 kubelet[3174]: W0905 23:56:42.137931 3174 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object Sep 5 23:56:42.145331 systemd[1]: Created slice kubepods-besteffort-pod75131aad_a28c_402b_ad91_a268726f8ed5.slice - libcontainer container kubepods-besteffort-pod75131aad_a28c_402b_ad91_a268726f8ed5.slice. 
Sep 5 23:56:42.330879 kubelet[3174]: E0905 23:56:42.137942 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:42.330879 kubelet[3174]: I0905 23:56:42.214879 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee5a4d86-adeb-4a20-b975-6bb9030118b8-goldmane-key-pair\") pod \"goldmane-54d579b49d-qvx8t\" (UID: \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\") " pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:42.330879 kubelet[3174]: I0905 23:56:42.214922 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/931034f2-80f5-4484-bfdc-20cfabfeeec2-config-volume\") pod \"coredns-668d6bf9bc-rf2fc\" (UID: \"931034f2-80f5-4484-bfdc-20cfabfeeec2\") " pod="kube-system/coredns-668d6bf9bc-rf2fc" Sep 5 23:56:42.330879 kubelet[3174]: I0905 23:56:42.214941 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c33313ed-3911-49ca-80e5-a8358a33d55f-config-volume\") pod \"coredns-668d6bf9bc-tblgm\" (UID: \"c33313ed-3911-49ca-80e5-a8358a33d55f\") " pod="kube-system/coredns-668d6bf9bc-tblgm" Sep 5 23:56:42.330879 kubelet[3174]: I0905 23:56:42.214959 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-config\") pod \"goldmane-54d579b49d-qvx8t\" (UID: \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\") " pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:42.153702 systemd[1]: Created slice kubepods-besteffort-pod7a07b96c_cdaf_43ca_a04f_c273da6229dd.slice - libcontainer container kubepods-besteffort-pod7a07b96c_cdaf_43ca_a04f_c273da6229dd.slice. 
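The burst of reflector warnings above ("no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object") is the Node authorizer at work: a kubelet identity may only read a Secret or ConfigMap once some pod bound to that node actually references it, and these list/watch calls race the binding's propagation into the authorizer's graph, so they are typically transient. One way to observe the same decision from outside the kubelet is a SubjectAccessReview; a minimal client-go sketch, assuming in-cluster credentials permitted to create subjectaccessreviews (the user and object names are taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the API server whether the node's kubelet identity may list the
	// ConfigMap; the node authorizer only allows this once a pod scheduled
	// to that node references the object.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User:   "system:node:ci-4081.3.5-n-8e502b48f1",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:      "list",
				Resource:  "configmaps",
				Namespace: "calico-system",
				Name:      "whisker-ca-bundle",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}
```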
Sep 5 23:56:42.331082 kubelet[3174]: I0905 23:56:42.214974 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/306b9be3-0678-43c7-80e8-c3eff7dac6f2-calico-apiserver-certs\") pod \"calico-apiserver-f87497d48-k8227\" (UID: \"306b9be3-0678-43c7-80e8-c3eff7dac6f2\") " pod="calico-apiserver/calico-apiserver-f87497d48-k8227" Sep 5 23:56:42.331082 kubelet[3174]: I0905 23:56:42.214991 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6h5\" (UniqueName: \"kubernetes.io/projected/ed64e8c1-23f2-423a-b9fd-51cbac98b68d-kube-api-access-fr6h5\") pod \"calico-apiserver-f87497d48-47nvq\" (UID: \"ed64e8c1-23f2-423a-b9fd-51cbac98b68d\") " pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" Sep 5 23:56:42.331082 kubelet[3174]: I0905 23:56:42.215010 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75131aad-a28c-402b-ad91-a268726f8ed5-tigera-ca-bundle\") pod \"calico-kube-controllers-86c5b95765-hgw7k\" (UID: \"75131aad-a28c-402b-ad91-a268726f8ed5\") " pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" Sep 5 23:56:42.331082 kubelet[3174]: I0905 23:56:42.215029 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj4s\" (UniqueName: \"kubernetes.io/projected/931034f2-80f5-4484-bfdc-20cfabfeeec2-kube-api-access-vrj4s\") pod \"coredns-668d6bf9bc-rf2fc\" (UID: \"931034f2-80f5-4484-bfdc-20cfabfeeec2\") " pod="kube-system/coredns-668d6bf9bc-rf2fc" Sep 5 23:56:42.331082 kubelet[3174]: I0905 23:56:42.215044 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-backend-key-pair\") pod \"whisker-589cbf6db4-hgmcp\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " pod="calico-system/whisker-589cbf6db4-hgmcp" Sep 5 23:56:42.164698 systemd[1]: Created slice kubepods-besteffort-pod306b9be3_0678_43c7_80e8_c3eff7dac6f2.slice - libcontainer container kubepods-besteffort-pod306b9be3_0678_43c7_80e8_c3eff7dac6f2.slice. 
Sep 5 23:56:42.331234 kubelet[3174]: I0905 23:56:42.215059 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle\") pod \"whisker-589cbf6db4-hgmcp\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " pod="calico-system/whisker-589cbf6db4-hgmcp" Sep 5 23:56:42.331234 kubelet[3174]: I0905 23:56:42.215079 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9g5\" (UniqueName: \"kubernetes.io/projected/c33313ed-3911-49ca-80e5-a8358a33d55f-kube-api-access-wd9g5\") pod \"coredns-668d6bf9bc-tblgm\" (UID: \"c33313ed-3911-49ca-80e5-a8358a33d55f\") " pod="kube-system/coredns-668d6bf9bc-tblgm" Sep 5 23:56:42.331234 kubelet[3174]: I0905 23:56:42.215094 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mwb\" (UniqueName: \"kubernetes.io/projected/306b9be3-0678-43c7-80e8-c3eff7dac6f2-kube-api-access-k4mwb\") pod \"calico-apiserver-f87497d48-k8227\" (UID: \"306b9be3-0678-43c7-80e8-c3eff7dac6f2\") " pod="calico-apiserver/calico-apiserver-f87497d48-k8227" Sep 5 23:56:42.331234 kubelet[3174]: I0905 23:56:42.215116 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6v5n\" (UniqueName: \"kubernetes.io/projected/75131aad-a28c-402b-ad91-a268726f8ed5-kube-api-access-r6v5n\") pod \"calico-kube-controllers-86c5b95765-hgw7k\" (UID: \"75131aad-a28c-402b-ad91-a268726f8ed5\") " pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" Sep 5 23:56:42.331234 kubelet[3174]: I0905 23:56:42.215133 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmddz\" (UniqueName: \"kubernetes.io/projected/7a07b96c-cdaf-43ca-a04f-c273da6229dd-kube-api-access-nmddz\") pod \"whisker-589cbf6db4-hgmcp\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " pod="calico-system/whisker-589cbf6db4-hgmcp" Sep 5 23:56:42.172577 systemd[1]: Created slice kubepods-besteffort-podee5a4d86_adeb_4a20_b975_6bb9030118b8.slice - libcontainer container kubepods-besteffort-podee5a4d86_adeb_4a20_b975_6bb9030118b8.slice. 
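Each kube-api-access-* volume in the reconciler entries above is an admission-injected projected volume: a bound service-account token, the kube-root-ca.crt ConfigMap (which is why that ConfigMap appears in the forbidden-list warnings earlier), and the pod's namespace, merged into one read-only mount. A sketch of what such a volume looks like when built with the Kubernetes API types (the volume name is taken from the log; the expiry value is the usual default and illustrative here):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	expiry := int64(3607) // seconds; illustrative default
	vol := corev1.Volume{
		Name: "kube-api-access-fr6h5", // suffix is random per pod
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound, auto-rotated service-account token.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path: "token", ExpirationSeconds: &expiry}},
					// Cluster CA bundle, from the per-namespace ConfigMap.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// The pod's own namespace via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Print(string(out))
}
```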
Sep 5 23:56:42.331383 kubelet[3174]: I0905 23:56:42.215153 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed64e8c1-23f2-423a-b9fd-51cbac98b68d-calico-apiserver-certs\") pod \"calico-apiserver-f87497d48-47nvq\" (UID: \"ed64e8c1-23f2-423a-b9fd-51cbac98b68d\") " pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" Sep 5 23:56:42.331383 kubelet[3174]: I0905 23:56:42.215168 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qvx8t\" (UID: \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\") " pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:42.331383 kubelet[3174]: I0905 23:56:42.215185 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqsf\" (UniqueName: \"kubernetes.io/projected/ee5a4d86-adeb-4a20-b975-6bb9030118b8-kube-api-access-jlqsf\") pod \"goldmane-54d579b49d-qvx8t\" (UID: \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\") " pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:42.178545 systemd[1]: Created slice kubepods-besteffort-poded64e8c1_23f2_423a_b9fd_51cbac98b68d.slice - libcontainer container kubepods-besteffort-poded64e8c1_23f2_423a_b9fd_51cbac98b68d.slice. Sep 5 23:56:42.592986 containerd[1698]: time="2025-09-05T23:56:42.592614721Z" level=info msg="shim disconnected" id=7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c namespace=k8s.io Sep 5 23:56:42.592986 containerd[1698]: time="2025-09-05T23:56:42.592684601Z" level=warning msg="cleaning up after shim disconnected" id=7fdb1d19d56f952f3a255457934418711748f14465236438a50636cebef4887c namespace=k8s.io Sep 5 23:56:42.592986 containerd[1698]: time="2025-09-05T23:56:42.592693601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:42.639181 containerd[1698]: time="2025-09-05T23:56:42.638777895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tblgm,Uid:c33313ed-3911-49ca-80e5-a8358a33d55f,Namespace:kube-system,Attempt:0,}" Sep 5 23:56:42.639181 containerd[1698]: time="2025-09-05T23:56:42.638918896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5b95765-hgw7k,Uid:75131aad-a28c-402b-ad91-a268726f8ed5,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:42.639181 containerd[1698]: time="2025-09-05T23:56:42.639126536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rf2fc,Uid:931034f2-80f5-4484-bfdc-20cfabfeeec2,Namespace:kube-system,Attempt:0,}" Sep 5 23:56:42.839045 containerd[1698]: time="2025-09-05T23:56:42.838909652Z" level=error msg="Failed to destroy network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.839716 containerd[1698]: time="2025-09-05T23:56:42.839393933Z" level=error msg="encountered an error cleaning up failed sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 5 23:56:42.839716 containerd[1698]: time="2025-09-05T23:56:42.839451573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tblgm,Uid:c33313ed-3911-49ca-80e5-a8358a33d55f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.840024 kubelet[3174]: E0905 23:56:42.839984 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.840298 kubelet[3174]: E0905 23:56:42.840053 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tblgm" Sep 5 23:56:42.840298 kubelet[3174]: E0905 23:56:42.840073 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tblgm" Sep 5 23:56:42.840298 kubelet[3174]: E0905 23:56:42.840111 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tblgm_kube-system(c33313ed-3911-49ca-80e5-a8358a33d55f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tblgm_kube-system(c33313ed-3911-49ca-80e5-a8358a33d55f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tblgm" podUID="c33313ed-3911-49ca-80e5-a8358a33d55f" Sep 5 23:56:42.878343 containerd[1698]: time="2025-09-05T23:56:42.878212019Z" level=error msg="Failed to destroy network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.879407 containerd[1698]: time="2025-09-05T23:56:42.879248300Z" level=error msg="encountered an error cleaning up failed sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.879615 containerd[1698]: time="2025-09-05T23:56:42.879308260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5b95765-hgw7k,Uid:75131aad-a28c-402b-ad91-a268726f8ed5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.879824 kubelet[3174]: E0905 23:56:42.879771 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.880804 kubelet[3174]: E0905 23:56:42.879827 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" Sep 5 23:56:42.880804 kubelet[3174]: E0905 23:56:42.880581 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" Sep 5 23:56:42.880804 kubelet[3174]: E0905 23:56:42.880630 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86c5b95765-hgw7k_calico-system(75131aad-a28c-402b-ad91-a268726f8ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86c5b95765-hgw7k_calico-system(75131aad-a28c-402b-ad91-a268726f8ed5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" podUID="75131aad-a28c-402b-ad91-a268726f8ed5" Sep 5 23:56:42.887380 containerd[1698]: time="2025-09-05T23:56:42.887284470Z" level=error msg="Failed to destroy network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.888565 containerd[1698]: time="2025-09-05T23:56:42.888459791Z" level=error msg="encountered an error cleaning up failed sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.888565 containerd[1698]: time="2025-09-05T23:56:42.888524591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rf2fc,Uid:931034f2-80f5-4484-bfdc-20cfabfeeec2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.889307 kubelet[3174]: E0905 23:56:42.888805 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:42.889307 kubelet[3174]: E0905 23:56:42.888887 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rf2fc" Sep 5 23:56:42.889307 kubelet[3174]: E0905 23:56:42.889042 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rf2fc" Sep 5 23:56:42.889445 kubelet[3174]: E0905 23:56:42.889107 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rf2fc_kube-system(931034f2-80f5-4484-bfdc-20cfabfeeec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rf2fc_kube-system(931034f2-80f5-4484-bfdc-20cfabfeeec2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rf2fc" podUID="931034f2-80f5-4484-bfdc-20cfabfeeec2" Sep 5 23:56:42.953548 kubelet[3174]: I0905 23:56:42.953518 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:42.955159 containerd[1698]: time="2025-09-05T23:56:42.954594589Z" level=info msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" Sep 5 23:56:42.955159 containerd[1698]: time="2025-09-05T23:56:42.954784790Z" level=info msg="Ensure that sandbox 7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8 in 
task-service has been cleanup successfully" Sep 5 23:56:42.960636 containerd[1698]: time="2025-09-05T23:56:42.960608437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:56:42.962930 kubelet[3174]: I0905 23:56:42.962233 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:42.963043 containerd[1698]: time="2025-09-05T23:56:42.962603399Z" level=info msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" Sep 5 23:56:42.963043 containerd[1698]: time="2025-09-05T23:56:42.962781159Z" level=info msg="Ensure that sandbox 27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c in task-service has been cleanup successfully" Sep 5 23:56:42.964762 kubelet[3174]: I0905 23:56:42.964499 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:42.965805 containerd[1698]: time="2025-09-05T23:56:42.965772243Z" level=info msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" Sep 5 23:56:42.966805 containerd[1698]: time="2025-09-05T23:56:42.966761964Z" level=info msg="Ensure that sandbox a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97 in task-service has been cleanup successfully" Sep 5 23:56:43.012965 containerd[1698]: time="2025-09-05T23:56:43.008358933Z" level=error msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" failed" error="failed to destroy network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:43.013480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97-shm.mount: Deactivated successfully. 
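Every sandbox add and delete in this stretch fails on the same stat: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file that calico-node writes on startup, and that agent has not run yet (its image pull, ghcr.io/flatcar/calico/node:v3.30.3, is only just starting above). A rough sketch of that lookup under those assumptions (the helper name is mine, not Calico's; the error text mirrors the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFromFile mirrors the check behind the repeated log error: stat
// /var/lib/calico/nodename, which exists only after calico-node has started
// on this host. Until then, every CNI add/delete fails with this message.
func nodenameFromFile(path string) (string, error) {
	if _, err := os.Stat(path); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodenameFromFile("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("error:", err) // "stat ...: no such file or directory: check that ..."
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

This is why the failures clear on their own once calico-node is pulled and running: the file appears, and both RunPodSandbox and the StopPodSandbox cleanups start succeeding.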
Sep 5 23:56:43.019761 kubelet[3174]: E0905 23:56:43.019716 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:43.019921 kubelet[3174]: E0905 23:56:43.019775 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8"} Sep 5 23:56:43.019921 kubelet[3174]: E0905 23:56:43.019828 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"931034f2-80f5-4484-bfdc-20cfabfeeec2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:43.019921 kubelet[3174]: E0905 23:56:43.019871 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"931034f2-80f5-4484-bfdc-20cfabfeeec2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rf2fc" podUID="931034f2-80f5-4484-bfdc-20cfabfeeec2" Sep 5 23:56:43.036377 containerd[1698]: time="2025-09-05T23:56:43.036014046Z" level=error msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" failed" error="failed to destroy network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:43.037364 kubelet[3174]: E0905 23:56:43.037326 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:43.037596 kubelet[3174]: E0905 23:56:43.037567 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c"} Sep 5 23:56:43.037688 kubelet[3174]: E0905 23:56:43.037674 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75131aad-a28c-402b-ad91-a268726f8ed5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:43.037787 kubelet[3174]: E0905 23:56:43.037769 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75131aad-a28c-402b-ad91-a268726f8ed5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" podUID="75131aad-a28c-402b-ad91-a268726f8ed5" Sep 5 23:56:43.050921 containerd[1698]: time="2025-09-05T23:56:43.050867543Z" level=error msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" failed" error="failed to destroy network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:43.051293 kubelet[3174]: E0905 23:56:43.051257 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:43.051445 kubelet[3174]: E0905 23:56:43.051423 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97"} Sep 5 23:56:43.051530 kubelet[3174]: E0905 23:56:43.051516 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c33313ed-3911-49ca-80e5-a8358a33d55f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:43.051637 kubelet[3174]: E0905 23:56:43.051618 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c33313ed-3911-49ca-80e5-a8358a33d55f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tblgm" podUID="c33313ed-3911-49ca-80e5-a8358a33d55f" Sep 5 23:56:43.316980 kubelet[3174]: E0905 23:56:43.316948 3174 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for 
the condition Sep 5 23:56:43.317722 kubelet[3174]: E0905 23:56:43.317224 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle podName:7a07b96c-cdaf-43ca-a04f-c273da6229dd nodeName:}" failed. No retries permitted until 2025-09-05 23:56:43.817199659 +0000 UTC m=+30.098479985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle") pod "whisker-589cbf6db4-hgmcp" (UID: "7a07b96c-cdaf-43ca-a04f-c273da6229dd") : failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:43.318264 kubelet[3174]: E0905 23:56:43.318235 3174 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 5 23:56:43.318351 kubelet[3174]: E0905 23:56:43.318303 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306b9be3-0678-43c7-80e8-c3eff7dac6f2-calico-apiserver-certs podName:306b9be3-0678-43c7-80e8-c3eff7dac6f2 nodeName:}" failed. No retries permitted until 2025-09-05 23:56:43.81828742 +0000 UTC m=+30.099567746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/306b9be3-0678-43c7-80e8-c3eff7dac6f2-calico-apiserver-certs") pod "calico-apiserver-f87497d48-k8227" (UID: "306b9be3-0678-43c7-80e8-c3eff7dac6f2") : failed to sync secret cache: timed out waiting for the condition Sep 5 23:56:43.318405 kubelet[3174]: E0905 23:56:43.318361 3174 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 5 23:56:43.318405 kubelet[3174]: E0905 23:56:43.318384 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed64e8c1-23f2-423a-b9fd-51cbac98b68d-calico-apiserver-certs podName:ed64e8c1-23f2-423a-b9fd-51cbac98b68d nodeName:}" failed. No retries permitted until 2025-09-05 23:56:43.81837706 +0000 UTC m=+30.099657386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ed64e8c1-23f2-423a-b9fd-51cbac98b68d-calico-apiserver-certs") pod "calico-apiserver-f87497d48-47nvq" (UID: "ed64e8c1-23f2-423a-b9fd-51cbac98b68d") : failed to sync secret cache: timed out waiting for the condition Sep 5 23:56:43.318405 kubelet[3174]: E0905 23:56:43.318402 3174 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:43.318485 kubelet[3174]: E0905 23:56:43.318424 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-goldmane-ca-bundle podName:ee5a4d86-adeb-4a20-b975-6bb9030118b8 nodeName:}" failed. No retries permitted until 2025-09-05 23:56:43.81841622 +0000 UTC m=+30.099696506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-goldmane-ca-bundle") pod "goldmane-54d579b49d-qvx8t" (UID: "ee5a4d86-adeb-4a20-b975-6bb9030118b8") : failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:43.318485 kubelet[3174]: E0905 23:56:43.318436 3174 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:43.318485 kubelet[3174]: E0905 23:56:43.318454 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-config podName:ee5a4d86-adeb-4a20-b975-6bb9030118b8 nodeName:}" failed. No retries permitted until 2025-09-05 23:56:43.81844822 +0000 UTC m=+30.099728546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ee5a4d86-adeb-4a20-b975-6bb9030118b8-config") pod "goldmane-54d579b49d-qvx8t" (UID: "ee5a4d86-adeb-4a20-b975-6bb9030118b8") : failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:43.838698 systemd[1]: Created slice kubepods-besteffort-podb765c2c2_113c_4a43_beb8_f69a462337be.slice - libcontainer container kubepods-besteffort-podb765c2c2_113c_4a43_beb8_f69a462337be.slice. Sep 5 23:56:43.841117 containerd[1698]: time="2025-09-05T23:56:43.841071719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-k8227,Uid:306b9be3-0678-43c7-80e8-c3eff7dac6f2,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:56:43.842246 containerd[1698]: time="2025-09-05T23:56:43.842035960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8qjr6,Uid:b765c2c2-113c-4a43-beb8-f69a462337be,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:43.843020 containerd[1698]: time="2025-09-05T23:56:43.842534761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-47nvq,Uid:ed64e8c1-23f2-423a-b9fd-51cbac98b68d,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:56:43.843020 containerd[1698]: time="2025-09-05T23:56:43.842726601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589cbf6db4-hgmcp,Uid:7a07b96c-cdaf-43ca-a04f-c273da6229dd,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:43.843020 containerd[1698]: time="2025-09-05T23:56:43.842899402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qvx8t,Uid:ee5a4d86-adeb-4a20-b975-6bb9030118b8,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:44.153797 containerd[1698]: time="2025-09-05T23:56:44.153686770Z" level=error msg="Failed to destroy network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.155504 containerd[1698]: time="2025-09-05T23:56:44.155467172Z" level=error msg="encountered an error cleaning up failed sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.155749 containerd[1698]: time="2025-09-05T23:56:44.155639612Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8qjr6,Uid:b765c2c2-113c-4a43-beb8-f69a462337be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.156253 kubelet[3174]: E0905 23:56:44.155893 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.156253 kubelet[3174]: E0905 23:56:44.155949 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:44.156253 kubelet[3174]: E0905 23:56:44.155969 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8qjr6" Sep 5 23:56:44.157167 kubelet[3174]: E0905 23:56:44.156007 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8qjr6_calico-system(b765c2c2-113c-4a43-beb8-f69a462337be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8qjr6_calico-system(b765c2c2-113c-4a43-beb8-f69a462337be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:44.176194 containerd[1698]: time="2025-09-05T23:56:44.176015316Z" level=error msg="Failed to destroy network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.177295 containerd[1698]: time="2025-09-05T23:56:44.177081997Z" level=error msg="encountered an error cleaning up failed sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.177295 containerd[1698]: time="2025-09-05T23:56:44.177143037Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-k8227,Uid:306b9be3-0678-43c7-80e8-c3eff7dac6f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.177463 kubelet[3174]: E0905 23:56:44.177385 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.177463 kubelet[3174]: E0905 23:56:44.177438 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f87497d48-k8227" Sep 5 23:56:44.177463 kubelet[3174]: E0905 23:56:44.177457 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f87497d48-k8227" Sep 5 23:56:44.179027 kubelet[3174]: E0905 23:56:44.177497 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f87497d48-k8227_calico-apiserver(306b9be3-0678-43c7-80e8-c3eff7dac6f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f87497d48-k8227_calico-apiserver(306b9be3-0678-43c7-80e8-c3eff7dac6f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f87497d48-k8227" podUID="306b9be3-0678-43c7-80e8-c3eff7dac6f2" Sep 5 23:56:44.182083 containerd[1698]: time="2025-09-05T23:56:44.181982043Z" level=error msg="Failed to destroy network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.182670 containerd[1698]: time="2025-09-05T23:56:44.182263243Z" level=error msg="encountered an error cleaning up failed sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 5 23:56:44.182670 containerd[1698]: time="2025-09-05T23:56:44.182322603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-47nvq,Uid:ed64e8c1-23f2-423a-b9fd-51cbac98b68d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.182782 kubelet[3174]: E0905 23:56:44.182515 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.182782 kubelet[3174]: E0905 23:56:44.182566 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" Sep 5 23:56:44.182782 kubelet[3174]: E0905 23:56:44.182585 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" Sep 5 23:56:44.182890 kubelet[3174]: E0905 23:56:44.182621 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f87497d48-47nvq_calico-apiserver(ed64e8c1-23f2-423a-b9fd-51cbac98b68d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f87497d48-47nvq_calico-apiserver(ed64e8c1-23f2-423a-b9fd-51cbac98b68d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" podUID="ed64e8c1-23f2-423a-b9fd-51cbac98b68d" Sep 5 23:56:44.204142 containerd[1698]: time="2025-09-05T23:56:44.204086509Z" level=error msg="Failed to destroy network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.204461 containerd[1698]: time="2025-09-05T23:56:44.204429710Z" level=error msg="encountered an error cleaning up failed sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.204513 containerd[1698]: time="2025-09-05T23:56:44.204486110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-589cbf6db4-hgmcp,Uid:7a07b96c-cdaf-43ca-a04f-c273da6229dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.205283 kubelet[3174]: E0905 23:56:44.204853 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.205283 kubelet[3174]: E0905 23:56:44.204913 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-589cbf6db4-hgmcp" Sep 5 23:56:44.205283 kubelet[3174]: E0905 23:56:44.204931 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-589cbf6db4-hgmcp" Sep 5 23:56:44.205463 kubelet[3174]: E0905 23:56:44.204979 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-589cbf6db4-hgmcp_calico-system(7a07b96c-cdaf-43ca-a04f-c273da6229dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-589cbf6db4-hgmcp_calico-system(7a07b96c-cdaf-43ca-a04f-c273da6229dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-589cbf6db4-hgmcp" podUID="7a07b96c-cdaf-43ca-a04f-c273da6229dd" Sep 5 23:56:44.208672 containerd[1698]: time="2025-09-05T23:56:44.208564675Z" level=error msg="Failed to destroy network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.209023 containerd[1698]: time="2025-09-05T23:56:44.208996595Z" level=error msg="encountered an error cleaning up failed sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\", marking sandbox state 
as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.209218 containerd[1698]: time="2025-09-05T23:56:44.209115795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qvx8t,Uid:ee5a4d86-adeb-4a20-b975-6bb9030118b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.209481 kubelet[3174]: E0905 23:56:44.209348 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:44.209481 kubelet[3174]: E0905 23:56:44.209408 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:44.209481 kubelet[3174]: E0905 23:56:44.209433 3174 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qvx8t" Sep 5 23:56:44.209691 kubelet[3174]: E0905 23:56:44.209634 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qvx8t_calico-system(ee5a4d86-adeb-4a20-b975-6bb9030118b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qvx8t_calico-system(ee5a4d86-adeb-4a20-b975-6bb9030118b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qvx8t" podUID="ee5a4d86-adeb-4a20-b975-6bb9030118b8" Sep 5 23:56:44.969134 kubelet[3174]: I0905 23:56:44.969098 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:44.970398 containerd[1698]: time="2025-09-05T23:56:44.969999256Z" level=info msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" Sep 5 23:56:44.970398 containerd[1698]: time="2025-09-05T23:56:44.970162337Z" level=info msg="Ensure that sandbox 
433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9 in task-service has been cleanup successfully" Sep 5 23:56:44.972139 kubelet[3174]: I0905 23:56:44.972115 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:44.973266 containerd[1698]: time="2025-09-05T23:56:44.973048140Z" level=info msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" Sep 5 23:56:44.974064 containerd[1698]: time="2025-09-05T23:56:44.974015421Z" level=info msg="Ensure that sandbox 86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440 in task-service has been cleanup successfully" Sep 5 23:56:44.976036 kubelet[3174]: I0905 23:56:44.975822 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:44.977262 containerd[1698]: time="2025-09-05T23:56:44.977237985Z" level=info msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" Sep 5 23:56:44.978342 containerd[1698]: time="2025-09-05T23:56:44.978219066Z" level=info msg="Ensure that sandbox 7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2 in task-service has been cleanup successfully" Sep 5 23:56:44.978937 kubelet[3174]: I0905 23:56:44.978907 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:44.981538 containerd[1698]: time="2025-09-05T23:56:44.981421390Z" level=info msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" Sep 5 23:56:44.982128 containerd[1698]: time="2025-09-05T23:56:44.982023311Z" level=info msg="Ensure that sandbox 3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7 in task-service has been cleanup successfully" Sep 5 23:56:44.983252 kubelet[3174]: I0905 23:56:44.983237 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:44.985389 containerd[1698]: time="2025-09-05T23:56:44.985178434Z" level=info msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" Sep 5 23:56:44.985909 containerd[1698]: time="2025-09-05T23:56:44.985742155Z" level=info msg="Ensure that sandbox b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8 in task-service has been cleanup successfully" Sep 5 23:56:45.012008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8-shm.mount: Deactivated successfully. Sep 5 23:56:45.012095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2-shm.mount: Deactivated successfully. Sep 5 23:56:45.012143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440-shm.mount: Deactivated successfully. 
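The repeated plugin type="calico" failed (add/delete) errors in this stretch come down to one missing file: the Calico CNI plugin resolves the local node's name from /var/lib/calico/nodename, which the calico/node container writes through its hostPath mount of /var/lib/calico/ once it is up (the error text itself says where to look). At this point in the log calico-node has not started yet, so every sandbox ADD and DEL fails identically until the image pull below completes. A minimal Go sketch of that lookup, mirroring the logged failure mode (illustrative only, not Calico's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path the calico/node container populates via its hostPath mount of /var/lib/calico/.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        // os.Stat reproduces the exact failure mode seen in the log:
        //   stat /var/lib/calico/nodename: no such file or directory
        if _, err := os.Stat(nodenameFile); err != nil {
            fmt.Fprintf(os.Stderr, "%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
            os.Exit(1)
        }
        data, _ := os.ReadFile(nodenameFile) // stat succeeded; read the node name a CNI ADD/DEL would use
        fmt.Println("node name:", strings.TrimSpace(string(data)))
    }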
Sep 5 23:56:45.031619 containerd[1698]: time="2025-09-05T23:56:45.031517489Z" level=error msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" failed" error="failed to destroy network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:45.032249 kubelet[3174]: E0905 23:56:45.032140 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:45.032249 kubelet[3174]: E0905 23:56:45.032219 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2"} Sep 5 23:56:45.032472 kubelet[3174]: E0905 23:56:45.032400 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"306b9be3-0678-43c7-80e8-c3eff7dac6f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:45.032472 kubelet[3174]: E0905 23:56:45.032431 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"306b9be3-0678-43c7-80e8-c3eff7dac6f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f87497d48-k8227" podUID="306b9be3-0678-43c7-80e8-c3eff7dac6f2" Sep 5 23:56:45.051368 containerd[1698]: time="2025-09-05T23:56:45.051314073Z" level=error msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" failed" error="failed to destroy network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:45.051919 kubelet[3174]: E0905 23:56:45.051729 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:45.051919 kubelet[3174]: E0905 23:56:45.051773 3174 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9"} Sep 5 23:56:45.051919 kubelet[3174]: E0905 23:56:45.051807 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:45.051919 kubelet[3174]: E0905 23:56:45.051852 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee5a4d86-adeb-4a20-b975-6bb9030118b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qvx8t" podUID="ee5a4d86-adeb-4a20-b975-6bb9030118b8" Sep 5 23:56:45.054938 containerd[1698]: time="2025-09-05T23:56:45.054900637Z" level=error msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" failed" error="failed to destroy network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:45.055352 containerd[1698]: time="2025-09-05T23:56:45.055326157Z" level=error msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" failed" error="failed to destroy network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:45.055727 kubelet[3174]: E0905 23:56:45.055697 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:45.056299 kubelet[3174]: E0905 23:56:45.055819 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7"} Sep 5 23:56:45.056299 kubelet[3174]: E0905 23:56:45.055877 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:45.056299 kubelet[3174]: E0905 23:56:45.055907 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-589cbf6db4-hgmcp" podUID="7a07b96c-cdaf-43ca-a04f-c273da6229dd" Sep 5 23:56:45.056299 kubelet[3174]: E0905 23:56:45.055478 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:45.056299 kubelet[3174]: E0905 23:56:45.056241 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8"} Sep 5 23:56:45.056494 kubelet[3174]: E0905 23:56:45.056263 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed64e8c1-23f2-423a-b9fd-51cbac98b68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:45.056565 kubelet[3174]: E0905 23:56:45.056279 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed64e8c1-23f2-423a-b9fd-51cbac98b68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" podUID="ed64e8c1-23f2-423a-b9fd-51cbac98b68d" Sep 5 23:56:45.056848 containerd[1698]: time="2025-09-05T23:56:45.056774319Z" level=error msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" failed" error="failed to destroy network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:56:45.057161 kubelet[3174]: E0905 23:56:45.057056 3174 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:45.057161 kubelet[3174]: E0905 23:56:45.057093 3174 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440"} Sep 5 23:56:45.057161 kubelet[3174]: E0905 23:56:45.057115 3174 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b765c2c2-113c-4a43-beb8-f69a462337be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:56:45.057161 kubelet[3174]: E0905 23:56:45.057134 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b765c2c2-113c-4a43-beb8-f69a462337be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8qjr6" podUID="b765c2c2-113c-4a43-beb8-f69a462337be" Sep 5 23:56:47.340409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626168587.mount: Deactivated successfully. Sep 5 23:56:52.122476 containerd[1698]: time="2025-09-05T23:56:52.122419178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:52.127154 containerd[1698]: time="2025-09-05T23:56:52.127106664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:56:52.137977 containerd[1698]: time="2025-09-05T23:56:52.137911036Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:52.143645 containerd[1698]: time="2025-09-05T23:56:52.143585162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:56:52.144604 containerd[1698]: time="2025-09-05T23:56:52.144166323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 9.183120326s" Sep 5 23:56:52.144604 containerd[1698]: time="2025-09-05T23:56:52.144203283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:56:52.156824 containerd[1698]: time="2025-09-05T23:56:52.156778698Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:56:52.239237 containerd[1698]: time="2025-09-05T23:56:52.239187672Z" level=info msg="CreateContainer within sandbox \"56861e30cc1a688ad64a579538a39714855dd7224f6fa6b67c97075b5a417011\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070\"" Sep 5 23:56:52.240138 containerd[1698]: time="2025-09-05T23:56:52.240079193Z" level=info msg="StartContainer for \"5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070\"" Sep 5 23:56:52.269055 systemd[1]: Started cri-containerd-5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070.scope - libcontainer container 5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070. Sep 5 23:56:52.303871 containerd[1698]: time="2025-09-05T23:56:52.302799105Z" level=info msg="StartContainer for \"5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070\" returns successfully" Sep 5 23:56:52.601953 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:56:52.602080 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 23:56:52.725124 containerd[1698]: time="2025-09-05T23:56:52.725067988Z" level=info msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.806 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.806 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" iface="eth0" netns="/var/run/netns/cni-463d8088-13ab-2685-84af-c38083cadb1b" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.806 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" iface="eth0" netns="/var/run/netns/cni-463d8088-13ab-2685-84af-c38083cadb1b" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.807 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" iface="eth0" netns="/var/run/netns/cni-463d8088-13ab-2685-84af-c38083cadb1b" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.807 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.807 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.839 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.840 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
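The pull logged just above moved a 151,100,319-byte calico/node image in 9.183120326 s, roughly 15.7 MiB/s, after which the container starts and the kernel loads WireGuard (the module Calico uses for its optional in-cluster encryption). The throughput arithmetic, with both constants taken verbatim from the log:

    package main

    import "fmt"

    func main() {
        const imageBytes = 151100319    // size reported in the PullImage entry above
        const pullSeconds = 9.183120326 // duration reported by containerd
        fmt.Printf("pull throughput: %.1f MiB/s\n", imageBytes/pullSeconds/(1<<20))
    }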
Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.840 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.878 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.878 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.881 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:52.885680 containerd[1698]: 2025-09-05 23:56:52.883 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:56:52.886070 containerd[1698]: time="2025-09-05T23:56:52.886030013Z" level=info msg="TearDown network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" successfully" Sep 5 23:56:52.886070 containerd[1698]: time="2025-09-05T23:56:52.886059733Z" level=info msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" returns successfully" Sep 5 23:56:52.989543 kubelet[3174]: I0905 23:56:52.989031 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmddz\" (UniqueName: \"kubernetes.io/projected/7a07b96c-cdaf-43ca-a04f-c273da6229dd-kube-api-access-nmddz\") pod \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " Sep 5 23:56:52.989543 kubelet[3174]: I0905 23:56:52.989082 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle\") pod \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " Sep 5 23:56:52.989543 kubelet[3174]: I0905 23:56:52.989112 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-backend-key-pair\") pod \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\" (UID: \"7a07b96c-cdaf-43ca-a04f-c273da6229dd\") " Sep 5 23:56:52.991735 kubelet[3174]: I0905 23:56:52.991699 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7a07b96c-cdaf-43ca-a04f-c273da6229dd" (UID: "7a07b96c-cdaf-43ca-a04f-c273da6229dd"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:56:52.992338 kubelet[3174]: I0905 23:56:52.992309 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a07b96c-cdaf-43ca-a04f-c273da6229dd-kube-api-access-nmddz" (OuterVolumeSpecName: "kube-api-access-nmddz") pod "7a07b96c-cdaf-43ca-a04f-c273da6229dd" (UID: "7a07b96c-cdaf-43ca-a04f-c273da6229dd"). InnerVolumeSpecName "kube-api-access-nmddz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:56:52.992391 kubelet[3174]: I0905 23:56:52.992375 3174 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7a07b96c-cdaf-43ca-a04f-c273da6229dd" (UID: "7a07b96c-cdaf-43ca-a04f-c273da6229dd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:56:53.008575 systemd[1]: Removed slice kubepods-besteffort-pod7a07b96c_cdaf_43ca_a04f_c273da6229dd.slice - libcontainer container kubepods-besteffort-pod7a07b96c_cdaf_43ca_a04f_c273da6229dd.slice. Sep 5 23:56:53.029722 kubelet[3174]: I0905 23:56:53.029662 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zxcfs" podStartSLOduration=2.126313515 podStartE2EDuration="22.029644977s" podCreationTimestamp="2025-09-05 23:56:31 +0000 UTC" firstStartedPulling="2025-09-05 23:56:32.241520102 +0000 UTC m=+18.522800428" lastFinishedPulling="2025-09-05 23:56:52.144851564 +0000 UTC m=+38.426131890" observedRunningTime="2025-09-05 23:56:53.029293337 +0000 UTC m=+39.310573823" watchObservedRunningTime="2025-09-05 23:56:53.029644977 +0000 UTC m=+39.310925263" Sep 5 23:56:53.090897 kubelet[3174]: I0905 23:56:53.090176 3174 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmddz\" (UniqueName: \"kubernetes.io/projected/7a07b96c-cdaf-43ca-a04f-c273da6229dd-kube-api-access-nmddz\") on node \"ci-4081.3.5-n-8e502b48f1\" DevicePath \"\"" Sep 5 23:56:53.091149 kubelet[3174]: I0905 23:56:53.091000 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-ca-bundle\") on node \"ci-4081.3.5-n-8e502b48f1\" DevicePath \"\"" Sep 5 23:56:53.091149 kubelet[3174]: I0905 23:56:53.091023 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a07b96c-cdaf-43ca-a04f-c273da6229dd-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-8e502b48f1\" DevicePath \"\"" Sep 5 23:56:53.111882 kubelet[3174]: I0905 23:56:53.109595 3174 status_manager.go:890] "Failed to get status for pod" podUID="ed41aaee-e2f1-4726-a824-dfe2a9bca868" pod="calico-system/whisker-8455d55b59-rg6zb" err="pods \"whisker-8455d55b59-rg6zb\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" Sep 5 23:56:53.112759 kubelet[3174]: W0905 23:56:53.111634 3174 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.5-n-8e502b48f1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this 
object Sep 5 23:56:53.112759 kubelet[3174]: E0905 23:56:53.112500 3174 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.5-n-8e502b48f1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-8e502b48f1' and this object" logger="UnhandledError" Sep 5 23:56:53.118757 systemd[1]: Created slice kubepods-besteffort-poded41aaee_e2f1_4726_a824_dfe2a9bca868.slice - libcontainer container kubepods-besteffort-poded41aaee_e2f1_4726_a824_dfe2a9bca868.slice. Sep 5 23:56:53.153184 systemd[1]: run-netns-cni\x2d463d8088\x2d13ab\x2d2685\x2d84af\x2dc38083cadb1b.mount: Deactivated successfully. Sep 5 23:56:53.153293 systemd[1]: var-lib-kubelet-pods-7a07b96c\x2dcdaf\x2d43ca\x2da04f\x2dc273da6229dd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 23:56:53.153347 systemd[1]: var-lib-kubelet-pods-7a07b96c\x2dcdaf\x2d43ca\x2da04f\x2dc273da6229dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnmddz.mount: Deactivated successfully. Sep 5 23:56:53.191990 kubelet[3174]: I0905 23:56:53.191856 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed41aaee-e2f1-4726-a824-dfe2a9bca868-whisker-backend-key-pair\") pod \"whisker-8455d55b59-rg6zb\" (UID: \"ed41aaee-e2f1-4726-a824-dfe2a9bca868\") " pod="calico-system/whisker-8455d55b59-rg6zb" Sep 5 23:56:53.191990 kubelet[3174]: I0905 23:56:53.191906 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed41aaee-e2f1-4726-a824-dfe2a9bca868-whisker-ca-bundle\") pod \"whisker-8455d55b59-rg6zb\" (UID: \"ed41aaee-e2f1-4726-a824-dfe2a9bca868\") " pod="calico-system/whisker-8455d55b59-rg6zb" Sep 5 23:56:53.191990 kubelet[3174]: I0905 23:56:53.191922 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2cw\" (UniqueName: \"kubernetes.io/projected/ed41aaee-e2f1-4726-a824-dfe2a9bca868-kube-api-access-zt2cw\") pod \"whisker-8455d55b59-rg6zb\" (UID: \"ed41aaee-e2f1-4726-a824-dfe2a9bca868\") " pod="calico-system/whisker-8455d55b59-rg6zb" Sep 5 23:56:53.831319 containerd[1698]: time="2025-09-05T23:56:53.830933615Z" level=info msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" Sep 5 23:56:53.834451 kubelet[3174]: I0905 23:56:53.834230 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a07b96c-cdaf-43ca-a04f-c273da6229dd" path="/var/lib/kubelet/pods/7a07b96c-cdaf-43ca-a04f-c273da6229dd/volumes" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.880 [INFO][4398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.880 [INFO][4398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" iface="eth0" netns="/var/run/netns/cni-3e6ab75e-6dff-be7d-f295-ba710cd5c0af" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.881 [INFO][4398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" iface="eth0" netns="/var/run/netns/cni-3e6ab75e-6dff-be7d-f295-ba710cd5c0af" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.882 [INFO][4398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" iface="eth0" netns="/var/run/netns/cni-3e6ab75e-6dff-be7d-f295-ba710cd5c0af" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.882 [INFO][4398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.882 [INFO][4398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.902 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.902 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.902 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.911 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.911 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.912 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:53.916271 containerd[1698]: 2025-09-05 23:56:53.914 [INFO][4398] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:56:53.920395 containerd[1698]: time="2025-09-05T23:56:53.918451675Z" level=info msg="TearDown network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" successfully" Sep 5 23:56:53.920395 containerd[1698]: time="2025-09-05T23:56:53.918483315Z" level=info msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" returns successfully" Sep 5 23:56:53.920395 containerd[1698]: time="2025-09-05T23:56:53.920126757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5b95765-hgw7k,Uid:75131aad-a28c-402b-ad91-a268726f8ed5,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:53.919622 systemd[1]: run-netns-cni\x2d3e6ab75e\x2d6dff\x2dbe7d\x2df295\x2dba710cd5c0af.mount: Deactivated successfully. Sep 5 23:56:54.293063 kubelet[3174]: E0905 23:56:54.293015 3174 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:54.293689 kubelet[3174]: E0905 23:56:54.293104 3174 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed41aaee-e2f1-4726-a824-dfe2a9bca868-whisker-ca-bundle podName:ed41aaee-e2f1-4726-a824-dfe2a9bca868 nodeName:}" failed. No retries permitted until 2025-09-05 23:56:54.793085224 +0000 UTC m=+41.074365550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed41aaee-e2f1-4726-a824-dfe2a9bca868-whisker-ca-bundle") pod "whisker-8455d55b59-rg6zb" (UID: "ed41aaee-e2f1-4726-a824-dfe2a9bca868") : failed to sync configmap cache: timed out waiting for the condition Sep 5 23:56:54.445866 kernel: bpftool[4554]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:56:54.452411 systemd-networkd[1473]: cali408cdc0f56c: Link UP Sep 5 23:56:54.453593 systemd-networkd[1473]: cali408cdc0f56c: Gained carrier Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.021 [INFO][4411] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.045 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0 calico-kube-controllers-86c5b95765- calico-system 75131aad-a28c-402b-ad91-a268726f8ed5 927 0 2025-09-05 23:56:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86c5b95765 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 calico-kube-controllers-86c5b95765-hgw7k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali408cdc0f56c [] [] }} ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.045 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" 
WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.083 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" HandleID="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.084 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" HandleID="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"calico-kube-controllers-86c5b95765-hgw7k", "timestamp":"2025-09-05 23:56:54.083962745 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.084 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.084 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.084 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.095 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.100 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.111 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.113 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.116 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.117 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.118 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36 Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.126 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" 
host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.136 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.193/26] block=192.168.123.192/26 handle="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.137 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.193/26] handle="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.137 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:54.491912 containerd[1698]: 2025-09-05 23:56:54.137 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.193/26] IPv6=[] ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" HandleID="k8s-pod-network.11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.140 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0", GenerateName:"calico-kube-controllers-86c5b95765-", Namespace:"calico-system", SelfLink:"", UID:"75131aad-a28c-402b-ad91-a268726f8ed5", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5b95765", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"calico-kube-controllers-86c5b95765-hgw7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408cdc0f56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.140 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.193/32] ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.140 
[INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali408cdc0f56c ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.460 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.461 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0", GenerateName:"calico-kube-controllers-86c5b95765-", Namespace:"calico-system", SelfLink:"", UID:"75131aad-a28c-402b-ad91-a268726f8ed5", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5b95765", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36", Pod:"calico-kube-controllers-86c5b95765-hgw7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408cdc0f56c", MAC:"12:c7:01:9a:40:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:54.495080 containerd[1698]: 2025-09-05 23:56:54.486 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36" Namespace="calico-system" Pod="calico-kube-controllers-86c5b95765-hgw7k" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:56:54.925560 containerd[1698]: time="2025-09-05T23:56:54.925510749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8455d55b59-rg6zb,Uid:ed41aaee-e2f1-4726-a824-dfe2a9bca868,Namespace:calico-system,Attempt:0,}" Sep 5 23:56:55.253619 systemd-networkd[1473]: vxlan.calico: Link UP Sep 5 23:56:55.253626 systemd-networkd[1473]: vxlan.calico: 
Gained carrier Sep 5 23:56:55.833622 containerd[1698]: time="2025-09-05T23:56:55.833555509Z" level=info msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" Sep 5 23:56:55.900442 containerd[1698]: time="2025-09-05T23:56:55.899705105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:55.902152 containerd[1698]: time="2025-09-05T23:56:55.901828587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:55.902393 containerd[1698]: time="2025-09-05T23:56:55.902328228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:55.902881 containerd[1698]: time="2025-09-05T23:56:55.902640908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:55.941017 systemd[1]: Started cri-containerd-11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36.scope - libcontainer container 11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36. Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.885 [INFO][4636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.885 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" iface="eth0" netns="/var/run/netns/cni-39bfe33f-f4c7-adf6-2c11-30e118735d1c" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.885 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" iface="eth0" netns="/var/run/netns/cni-39bfe33f-f4c7-adf6-2c11-30e118735d1c" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.887 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" iface="eth0" netns="/var/run/netns/cni-39bfe33f-f4c7-adf6-2c11-30e118735d1c" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.887 [INFO][4636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.887 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.936 [INFO][4649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.936 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.936 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
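The ipam.go allocation trace a few entries above ("Looking up existing affinities" through "Auto-assigned 1 out of 1 IPv4s") is the constructive half of the lifecycle: this host holds an affinity for the block 192.168.123.192/26 and claims its first free address, 192.168.123.193, for the calico-kube-controllers pod, recorded against the sandbox's handle ID. A /26 block gives the host 64 addresses; a quick check of the logged values with Go's net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Per-host affinity block and the claimed pod address, both from the IPAM trace.
        block := netip.MustParsePrefix("192.168.123.192/26")
        pod := netip.MustParseAddr("192.168.123.193")

        fmt.Println("pod IP inside the host's block:", block.Contains(pod)) // true
        fmt.Println("addresses per /26 block:", 1<<(32-block.Bits()))       // 64
    }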
Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.947 [WARNING][4649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.947 [INFO][4649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.949 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:55.953699 containerd[1698]: 2025-09-05 23:56:55.952 [INFO][4636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:56:55.955941 containerd[1698]: time="2025-09-05T23:56:55.955261728Z" level=info msg="TearDown network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" successfully" Sep 5 23:56:55.955941 containerd[1698]: time="2025-09-05T23:56:55.955606609Z" level=info msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" returns successfully" Sep 5 23:56:55.960139 containerd[1698]: time="2025-09-05T23:56:55.960110894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rf2fc,Uid:931034f2-80f5-4484-bfdc-20cfabfeeec2,Namespace:kube-system,Attempt:1,}" Sep 5 23:56:55.960560 systemd[1]: run-netns-cni\x2d39bfe33f\x2df4c7\x2dadf6\x2d2c11\x2d30e118735d1c.mount: Deactivated successfully. Sep 5 23:56:55.983613 containerd[1698]: time="2025-09-05T23:56:55.983499240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5b95765-hgw7k,Uid:75131aad-a28c-402b-ad91-a268726f8ed5,Namespace:calico-system,Attempt:1,} returns sandbox id \"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36\"" Sep 5 23:56:55.985132 containerd[1698]: time="2025-09-05T23:56:55.985002962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:56:56.116975 systemd-networkd[1473]: cali408cdc0f56c: Gained IPv6LL Sep 5 23:56:56.308989 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Sep 5 23:56:56.828551 containerd[1698]: time="2025-09-05T23:56:56.828062048Z" level=info msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" Sep 5 23:56:56.828551 containerd[1698]: time="2025-09-05T23:56:56.828062008Z" level=info msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.891 [INFO][4728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.891 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" iface="eth0" netns="/var/run/netns/cni-da2a7e64-d467-006d-19ef-e3b2c2de0485" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.892 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" iface="eth0" netns="/var/run/netns/cni-da2a7e64-d467-006d-19ef-e3b2c2de0485" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.892 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" iface="eth0" netns="/var/run/netns/cni-da2a7e64-d467-006d-19ef-e3b2c2de0485" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.892 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.892 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.928 [INFO][4745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.928 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.928 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.938 [WARNING][4745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.938 [INFO][4745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.940 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:56.943806 containerd[1698]: 2025-09-05 23:56:56.942 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:56:56.946685 containerd[1698]: time="2025-09-05T23:56:56.946623464Z" level=info msg="TearDown network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" successfully" Sep 5 23:56:56.946685 containerd[1698]: time="2025-09-05T23:56:56.946671504Z" level=info msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" returns successfully" Sep 5 23:56:56.948631 containerd[1698]: time="2025-09-05T23:56:56.947399985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-47nvq,Uid:ed64e8c1-23f2-423a-b9fd-51cbac98b68d,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:56:56.947908 systemd[1]: run-netns-cni\x2dda2a7e64\x2dd467\x2d006d\x2d19ef\x2de3b2c2de0485.mount: Deactivated successfully. Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.897 [INFO][4734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.897 [INFO][4734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" iface="eth0" netns="/var/run/netns/cni-406d341e-d06b-b55e-52eb-2d1f3b49e703" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.898 [INFO][4734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" iface="eth0" netns="/var/run/netns/cni-406d341e-d06b-b55e-52eb-2d1f3b49e703" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.898 [INFO][4734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" iface="eth0" netns="/var/run/netns/cni-406d341e-d06b-b55e-52eb-2d1f3b49e703" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.898 [INFO][4734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.898 [INFO][4734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.944 [INFO][4750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.947 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.947 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.958 [WARNING][4750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.958 [INFO][4750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.960 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:56.963785 containerd[1698]: 2025-09-05 23:56:56.961 [INFO][4734] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:56:56.964361 containerd[1698]: time="2025-09-05T23:56:56.963935883Z" level=info msg="TearDown network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" successfully" Sep 5 23:56:56.964361 containerd[1698]: time="2025-09-05T23:56:56.963970723Z" level=info msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" returns successfully" Sep 5 23:56:56.965444 containerd[1698]: time="2025-09-05T23:56:56.965403405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qvx8t,Uid:ee5a4d86-adeb-4a20-b975-6bb9030118b8,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:56.966700 systemd[1]: run-netns-cni\x2d406d341e\x2dd06b\x2db55e\x2d52eb\x2d2d1f3b49e703.mount: Deactivated successfully. Sep 5 23:56:57.828949 containerd[1698]: time="2025-09-05T23:56:57.828804713Z" level=info msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" Sep 5 23:56:57.832701 containerd[1698]: time="2025-09-05T23:56:57.831171956Z" level=info msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.922 [INFO][4778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.923 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" iface="eth0" netns="/var/run/netns/cni-8d7b6d69-9115-3502-6e19-8282c76f796a" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.924 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" iface="eth0" netns="/var/run/netns/cni-8d7b6d69-9115-3502-6e19-8282c76f796a" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.925 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" iface="eth0" netns="/var/run/netns/cni-8d7b6d69-9115-3502-6e19-8282c76f796a" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.925 [INFO][4778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.925 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.970 [INFO][4793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.970 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.970 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.981 [WARNING][4793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.981 [INFO][4793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.983 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:57.988776 containerd[1698]: 2025-09-05 23:56:57.986 [INFO][4778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:56:57.990148 containerd[1698]: time="2025-09-05T23:56:57.989619377Z" level=info msg="TearDown network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" successfully" Sep 5 23:56:57.990148 containerd[1698]: time="2025-09-05T23:56:57.989650297Z" level=info msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" returns successfully" Sep 5 23:56:57.992560 systemd[1]: run-netns-cni\x2d8d7b6d69\x2d9115\x2d3502\x2d6e19\x2d8282c76f796a.mount: Deactivated successfully. Sep 5 23:56:58.002568 containerd[1698]: time="2025-09-05T23:56:58.002030871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tblgm,Uid:c33313ed-3911-49ca-80e5-a8358a33d55f,Namespace:kube-system,Attempt:1,}" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.919 [INFO][4774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.920 [INFO][4774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" iface="eth0" netns="/var/run/netns/cni-ab1bb13b-d446-fd9f-f140-a44bd1cbfdd0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.920 [INFO][4774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" iface="eth0" netns="/var/run/netns/cni-ab1bb13b-d446-fd9f-f140-a44bd1cbfdd0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.924 [INFO][4774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" iface="eth0" netns="/var/run/netns/cni-ab1bb13b-d446-fd9f-f140-a44bd1cbfdd0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.924 [INFO][4774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.924 [INFO][4774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.972 [INFO][4791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.972 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.983 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.999 [WARNING][4791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:57.999 [INFO][4791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:58.002 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:58.005764 containerd[1698]: 2025-09-05 23:56:58.004 [INFO][4774] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:56:58.008309 containerd[1698]: time="2025-09-05T23:56:58.007904878Z" level=info msg="TearDown network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" successfully" Sep 5 23:56:58.008309 containerd[1698]: time="2025-09-05T23:56:58.007950438Z" level=info msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" returns successfully" Sep 5 23:56:58.009297 containerd[1698]: time="2025-09-05T23:56:58.009068479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8qjr6,Uid:b765c2c2-113c-4a43-beb8-f69a462337be,Namespace:calico-system,Attempt:1,}" Sep 5 23:56:58.009558 systemd[1]: run-netns-cni\x2dab1bb13b\x2dd446\x2dfd9f\x2df140\x2da44bd1cbfdd0.mount: Deactivated successfully. Sep 5 23:56:58.828235 containerd[1698]: time="2025-09-05T23:56:58.828057976Z" level=info msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" iface="eth0" netns="/var/run/netns/cni-f43d5561-8969-8a16-ed25-5b1a7d74d52c" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" iface="eth0" netns="/var/run/netns/cni-f43d5561-8969-8a16-ed25-5b1a7d74d52c" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" iface="eth0" netns="/var/run/netns/cni-f43d5561-8969-8a16-ed25-5b1a7d74d52c" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.900 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.935 [INFO][4830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.936 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.936 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.946 [WARNING][4830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.946 [INFO][4830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.947 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:58.951559 containerd[1698]: 2025-09-05 23:56:58.949 [INFO][4813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:56:58.952204 containerd[1698]: time="2025-09-05T23:56:58.951669077Z" level=info msg="TearDown network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" successfully" Sep 5 23:56:58.952204 containerd[1698]: time="2025-09-05T23:56:58.951875277Z" level=info msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" returns successfully" Sep 5 23:56:58.953744 containerd[1698]: time="2025-09-05T23:56:58.953491519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-k8227,Uid:306b9be3-0678-43c7-80e8-c3eff7dac6f2,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:56:58.995103 systemd[1]: run-netns-cni\x2df43d5561\x2d8969\x2d8a16\x2ded25\x2d5b1a7d74d52c.mount: Deactivated successfully. 
Sep 5 23:56:59.016750 systemd-networkd[1473]: cali04aca0c11ff: Link UP Sep 5 23:56:59.020267 systemd-networkd[1473]: cali04aca0c11ff: Gained carrier Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.937 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0 whisker-8455d55b59- calico-system ed41aaee-e2f1-4726-a824-dfe2a9bca868 926 0 2025-09-05 23:56:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8455d55b59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 whisker-8455d55b59-rg6zb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali04aca0c11ff [] [] }} ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.937 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.966 [INFO][4840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" HandleID="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.966 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" HandleID="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003319a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"whisker-8455d55b59-rg6zb", "timestamp":"2025-09-05 23:56:58.966626334 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.966 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.966 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
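The AutoAssignArgs above request exactly one IPv4 address (Num4:1, Num6:0) for the whisker pod, and the allocator works against this node's affine block, 192.168.123.192/26. A /26 gives the node a window of 64 addresses (.192 through .255). The snippet below only computes that range with net/netip as a sanity check; the block value is copied from the log, the code itself is illustrative:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The node's affine block, copied from the log above.
        block := netip.MustParsePrefix("192.168.123.192/26")

        first := block.Addr()
        last := first
        for i := 0; i < (1<<(32-block.Bits()))-1; i++ {
            last = last.Next()
        }
        fmt.Printf("block %s spans %s - %s (%d addresses)\n",
            block, first, last, 1<<(32-block.Bits()))
    }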
Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.966 [INFO][4840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.977 [INFO][4840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.981 [INFO][4840] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.984 [INFO][4840] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.986 [INFO][4840] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.988 [INFO][4840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.988 [INFO][4840] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.989 [INFO][4840] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:58.998 [INFO][4840] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:59.009 [INFO][4840] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.194/26] block=192.168.123.192/26 handle="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:59.009 [INFO][4840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.194/26] handle="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:59.009 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
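The walk above is the ordinal assignment inside the affine block: confirm the host's affinity, load the block, take a free slot, then write the block back to claim it ("Writing block in order to claim IPs") before releasing the lock. A toy model of that step follows, with ordinals 0 and 1 pre-marked used so the pick lands on .194 just as it does for whisker-8455d55b59-rg6zb; the bitmap layout is an assumption for illustration, not Calico's actual block format:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for an allocation block: the /26 plus a
    // 64-slot occupancy bitmap. Illustrative only, not Calico's format.
    type block struct {
        cidr netip.Prefix
        used uint64 // bit i set => ordinal i is allocated
    }

    func (b *block) assign() (netip.Addr, bool) {
        for i := 0; i < 64; i++ {
            if b.used&(1<<i) == 0 {
                b.used |= 1 << i
                a := b.cidr.Addr()
                for j := 0; j < i; j++ {
                    a = a.Next()
                }
                return a, true // caller must persist the block to claim the IP
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.123.192/26")}
        b.used = 0b11 // ordinals 0 and 1 already taken on this node
        ip, _ := b.assign()
        fmt.Println("assigned", ip) // 192.168.123.194, as claimed for whisker above
    }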
Sep 5 23:56:59.048270 containerd[1698]: 2025-09-05 23:56:59.009 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.194/26] IPv6=[] ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" HandleID="k8s-pod-network.ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.012 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0", GenerateName:"whisker-8455d55b59-", Namespace:"calico-system", SelfLink:"", UID:"ed41aaee-e2f1-4726-a824-dfe2a9bca868", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8455d55b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"whisker-8455d55b59-rg6zb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali04aca0c11ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.012 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.194/32] ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.012 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04aca0c11ff ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.018 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.019 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" 
Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0", GenerateName:"whisker-8455d55b59-", Namespace:"calico-system", SelfLink:"", UID:"ed41aaee-e2f1-4726-a824-dfe2a9bca868", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8455d55b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d", Pod:"whisker-8455d55b59-rg6zb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali04aca0c11ff", MAC:"d6:10:f8:17:83:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.049032 containerd[1698]: 2025-09-05 23:56:59.038 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d" Namespace="calico-system" Pod="whisker-8455d55b59-rg6zb" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--8455d55b59--rg6zb-eth0" Sep 5 23:56:59.157189 systemd-networkd[1473]: cali5c6c7226e85: Link UP Sep 5 23:56:59.158809 systemd-networkd[1473]: cali5c6c7226e85: Gained carrier Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.078 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0 coredns-668d6bf9bc- kube-system 931034f2-80f5-4484-bfdc-20cfabfeeec2 938 0 2025-09-05 23:56:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 coredns-668d6bf9bc-rf2fc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c6c7226e85 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.079 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.102 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" HandleID="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.102 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" HandleID="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"coredns-668d6bf9bc-rf2fc", "timestamp":"2025-09-05 23:56:59.102317369 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.102 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.102 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.102 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.113 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.119 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.123 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.125 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.128 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.128 [INFO][4869] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.129 [INFO][4869] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8 Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.138 [INFO][4869] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.146 [INFO][4869] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.195/26] block=192.168.123.192/26 handle="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" 
host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.146 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.195/26] handle="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.147 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:59.196753 containerd[1698]: 2025-09-05 23:56:59.147 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.195/26] IPv6=[] ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" HandleID="k8s-pod-network.ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.148 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"931034f2-80f5-4484-bfdc-20cfabfeeec2", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"coredns-668d6bf9bc-rf2fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c6c7226e85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.149 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.195/32] ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.149 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c6c7226e85 
ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.159 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.161 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"931034f2-80f5-4484-bfdc-20cfabfeeec2", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8", Pod:"coredns-668d6bf9bc-rf2fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c6c7226e85", MAC:"62:3e:6c:ba:47:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.198054 containerd[1698]: 2025-09-05 23:56:59.187 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rf2fc" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:56:59.272556 systemd-networkd[1473]: cali6e03b975789: Link UP Sep 5 23:56:59.273744 systemd-networkd[1473]: cali6e03b975789: Gained carrier Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.197 [INFO][4877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0 calico-apiserver-f87497d48- calico-apiserver ed64e8c1-23f2-423a-b9fd-51cbac98b68d 946 0 2025-09-05 23:56:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f87497d48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 calico-apiserver-f87497d48-47nvq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e03b975789 [] [] }} ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.197 [INFO][4877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.226 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" HandleID="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.226 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" HandleID="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"calico-apiserver-f87497d48-47nvq", "timestamp":"2025-09-05 23:56:59.226064111 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.226 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.226 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
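Note how the concurrent ADDs interleave in the log (IPAM requests [4840], [4869], [4898], [4927]) yet every assignment happens strictly between "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so no two pods on the node can grab the same ordinal. A stripped-down sketch of that serialization, with assumed toy state; which pod gets which ordinal here is scheduler-dependent, whereas the log records the actual order:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu   sync.Mutex
            next = 2 // ordinals 0 and 1 already in use, as in the block sketch
            wg   sync.WaitGroup
        )
        pods := []string{
            "whisker-8455d55b59-rg6zb",
            "coredns-668d6bf9bc-rf2fc",
            "calico-apiserver-f87497d48-47nvq",
            "goldmane-54d579b49d-qvx8t",
        }
        for _, pod := range pods {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                mu.Lock() // "Acquired host-wide IPAM lock."
                ord := next
                next++
                mu.Unlock() // "Released host-wide IPAM lock."
                fmt.Printf("%s -> 192.168.123.%d/26\n", pod, 192+ord)
            }(pod)
        }
        wg.Wait()
    }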
Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.226 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.237 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.242 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.246 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.248 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.250 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.250 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.251 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0 Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.256 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.266 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.196/26] block=192.168.123.192/26 handle="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.266 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.196/26] handle="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.266 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
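After the claim, the plugin sets a deterministic host-side veth name (cali6e03b975789 for this endpoint) rather than inventing one per run: the name is derived from a hash of the workload identity, so it is stable and fits the kernel's 15-character interface-name limit. A sketch of that derivation is below; the exact hash input varies across Calico versions, so treat it as an assumption and do not expect the printed name to reproduce the one in the log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName sketches the hash-derived naming scheme: "cali" plus the
    // first 11 hex digits of a SHA-1 over the workload identity, which
    // keeps the result at 15 characters. The key format is an assumption.
    func vethName(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("calico-apiserver", "calico-apiserver-f87497d48-47nvq"))
    }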
Sep 5 23:56:59.302163 containerd[1698]: 2025-09-05 23:56:59.266 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.196/26] IPv6=[] ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" HandleID="k8s-pod-network.7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.268 [INFO][4877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed64e8c1-23f2-423a-b9fd-51cbac98b68d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"calico-apiserver-f87497d48-47nvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e03b975789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.269 [INFO][4877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.196/32] ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.269 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e03b975789 ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.274 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.277 [INFO][4877] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed64e8c1-23f2-423a-b9fd-51cbac98b68d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0", Pod:"calico-apiserver-f87497d48-47nvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e03b975789", MAC:"fe:45:ea:af:14:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.302717 containerd[1698]: 2025-09-05 23:56:59.299 [INFO][4877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-47nvq" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:56:59.399440 systemd-networkd[1473]: cali94c9ef9cf19: Link UP Sep 5 23:56:59.399625 systemd-networkd[1473]: cali94c9ef9cf19: Gained carrier Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.328 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0 goldmane-54d579b49d- calico-system ee5a4d86-adeb-4a20-b975-6bb9030118b8 947 0 2025-09-05 23:56:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 goldmane-54d579b49d-qvx8t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali94c9ef9cf19 [] [] }} ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.328 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.354 [INFO][4927] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" HandleID="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.354 [INFO][4927] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" HandleID="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"goldmane-54d579b49d-qvx8t", "timestamp":"2025-09-05 23:56:59.354431577 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.354 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.354 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.354 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.364 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.368 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.373 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.374 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.376 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.376 [INFO][4927] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.377 [INFO][4927] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010 Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.387 [INFO][4927] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 
handle="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.392 [INFO][4927] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.197/26] block=192.168.123.192/26 handle="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.393 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.197/26] handle="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.393 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:59.424740 containerd[1698]: 2025-09-05 23:56:59.393 [INFO][4927] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.197/26] IPv6=[] ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" HandleID="k8s-pod-network.5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.395 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ee5a4d86-adeb-4a20-b975-6bb9030118b8", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"goldmane-54d579b49d-qvx8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94c9ef9cf19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.395 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.197/32] ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.395 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94c9ef9cf19 
ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.397 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.400 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ee5a4d86-adeb-4a20-b975-6bb9030118b8", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010", Pod:"goldmane-54d579b49d-qvx8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94c9ef9cf19", MAC:"d2:1b:10:5c:dc:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.425332 containerd[1698]: 2025-09-05 23:56:59.419 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010" Namespace="calico-system" Pod="goldmane-54d579b49d-qvx8t" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:56:59.450765 containerd[1698]: time="2025-09-05T23:56:59.450617407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:56:59.450765 containerd[1698]: time="2025-09-05T23:56:59.450699968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:56:59.450765 containerd[1698]: time="2025-09-05T23:56:59.450715048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:59.451075 containerd[1698]: time="2025-09-05T23:56:59.450807288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:56:59.468545 systemd[1]: Started cri-containerd-ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d.scope - libcontainer container ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d. Sep 5 23:56:59.516593 containerd[1698]: time="2025-09-05T23:56:59.516493323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8455d55b59-rg6zb,Uid:ed41aaee-e2f1-4726-a824-dfe2a9bca868,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d\"" Sep 5 23:56:59.593578 systemd-networkd[1473]: calic558296a777: Link UP Sep 5 23:56:59.594653 systemd-networkd[1473]: calic558296a777: Gained carrier Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.525 [INFO][4974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0 coredns-668d6bf9bc- kube-system c33313ed-3911-49ca-80e5-a8358a33d55f 952 0 2025-09-05 23:56:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 coredns-668d6bf9bc-tblgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic558296a777 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.525 [INFO][4974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.550 [INFO][4999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" HandleID="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.550 [INFO][4999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" HandleID="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"coredns-668d6bf9bc-tblgm", "timestamp":"2025-09-05 23:56:59.550548242 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:56:59.611068 containerd[1698]: 
2025-09-05 23:56:59.550 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.550 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.550 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.559 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.563 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.566 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.568 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.570 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.570 [INFO][4999] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.571 [INFO][4999] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.578 [INFO][4999] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.587 [INFO][4999] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.198/26] block=192.168.123.192/26 handle="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.587 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.198/26] handle="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.587 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
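
The IPAM trace above repeats a fixed sequence for every pod: count the request, take the host-wide IPAM lock, look up the node's block affinity, load the /26 block, claim one address, write the block back, release the lock. Below is a minimal Go sketch of the client call that produces such a trace, assuming the libcalico-go clientv3/ipam packages (import paths and the AutoAssign return type have shifted across Calico releases); the AutoAssignArgs field names and values are copied from the assignArgs dump in the log entry above.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Assumption: datastore settings (DATASTORE_TYPE etc.) come from the
	// environment, as they do on a CNI host.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Handle IDs in the trace are "k8s-pod-network." + the CNI ContainerID.
	handle := "k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a"

	// Field names/values mirror the assignArgs dump in the log line above.
	args := ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Hostname: "ci-4081.3.5-n-8e502b48f1",
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "ci-4081.3.5-n-8e502b48f1",
			"pod":       "coredns-668d6bf9bc-tblgm",
		},
	}

	// In the CNI plugin this call is wrapped by the host-wide IPAM lock seen
	// above, and claims from the node's affine block (192.168.123.192/26 here).
	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("assigned: %v\n", v4) // e.g. one /32 such as 192.168.123.198/32
}
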
Sep 5 23:56:59.611068 containerd[1698]: 2025-09-05 23:56:59.587 [INFO][4999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.198/26] IPv6=[] ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" HandleID="k8s-pod-network.e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.590 [INFO][4974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c33313ed-3911-49ca-80e5-a8358a33d55f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"coredns-668d6bf9bc-tblgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic558296a777", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.590 [INFO][4974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.198/32] ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.590 [INFO][4974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic558296a777 ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.591 [INFO][4974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.592 [INFO][4974] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c33313ed-3911-49ca-80e5-a8358a33d55f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a", Pod:"coredns-668d6bf9bc-tblgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic558296a777", MAC:"c2:43:e0:1c:a6:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:59.611772 containerd[1698]: 2025-09-05 23:56:59.607 [INFO][4974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-tblgm" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:00.277017 systemd-networkd[1473]: cali04aca0c11ff: Gained IPv6LL Sep 5 23:57:00.596995 systemd-networkd[1473]: cali94c9ef9cf19: Gained IPv6LL Sep 5 23:57:00.853122 systemd-networkd[1473]: cali5c6c7226e85: Gained IPv6LL Sep 5 23:57:00.853979 systemd-networkd[1473]: cali6e03b975789: Gained IPv6LL Sep 5 23:57:00.916826 systemd-networkd[1473]: calice14dd55aab: Link UP Sep 5 23:57:00.917104 systemd-networkd[1473]: calice14dd55aab: Gained carrier Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.848 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0 csi-node-driver- calico-system b765c2c2-113c-4a43-beb8-f69a462337be 951 0 2025-09-05 23:56:31 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 csi-node-driver-8qjr6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calice14dd55aab [] [] }} ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.848 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.871 [INFO][5026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" HandleID="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.871 [INFO][5026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" HandleID="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"csi-node-driver-8qjr6", "timestamp":"2025-09-05 23:57:00.871255832 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.871 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.871 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.871 [INFO][5026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.880 [INFO][5026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.883 [INFO][5026] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.887 [INFO][5026] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.889 [INFO][5026] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.891 [INFO][5026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.891 [INFO][5026] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.892 [INFO][5026] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8 Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.900 [INFO][5026] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.909 [INFO][5026] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.199/26] block=192.168.123.192/26 handle="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.909 [INFO][5026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.199/26] handle="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.909 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
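
Every assignment in these traces lands in the same 192.168.123.192/26 because the node holds an affinity for that block; a /26 spans the 64 addresses .192 through .255, which is why consecutive pods on this node receive .196, .197, .198, .199, and so on. A quick check with the Go standard library's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.123.192/26")

	// 2^(32-26) = 64 addresses in the node's affine block.
	fmt.Println(1 << (32 - block.Bits())) // 64

	// The IPs claimed in the traces above all fall inside it.
	for _, s := range []string{
		"192.168.123.196", "192.168.123.197", "192.168.123.198",
		"192.168.123.199", "192.168.123.200",
	} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
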
Sep 5 23:57:00.939283 containerd[1698]: 2025-09-05 23:57:00.909 [INFO][5026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.199/26] IPv6=[] ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" HandleID="k8s-pod-network.5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.912 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b765c2c2-113c-4a43-beb8-f69a462337be", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"csi-node-driver-8qjr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice14dd55aab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.912 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.199/32] ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.912 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice14dd55aab ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.917 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.917 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b765c2c2-113c-4a43-beb8-f69a462337be", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8", Pod:"csi-node-driver-8qjr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice14dd55aab", MAC:"06:0b:31:5f:af:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:00.940818 containerd[1698]: 2025-09-05 23:57:00.933 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8" Namespace="calico-system" Pod="csi-node-driver-8qjr6" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:01.174009 systemd-networkd[1473]: calic558296a777: Gained IPv6LL Sep 5 23:57:01.679769 containerd[1698]: time="2025-09-05T23:57:01.679336276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:01.679769 containerd[1698]: time="2025-09-05T23:57:01.679397316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:01.679769 containerd[1698]: time="2025-09-05T23:57:01.679412276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.680488 containerd[1698]: time="2025-09-05T23:57:01.680297397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.704010 systemd[1]: Started cri-containerd-ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8.scope - libcontainer container ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8. 
Sep 5 23:57:01.736754 containerd[1698]: time="2025-09-05T23:57:01.736677861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rf2fc,Uid:931034f2-80f5-4484-bfdc-20cfabfeeec2,Namespace:kube-system,Attempt:1,} returns sandbox id \"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8\"" Sep 5 23:57:01.755262 containerd[1698]: time="2025-09-05T23:57:01.755090442Z" level=info msg="CreateContainer within sandbox \"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:57:01.793076 containerd[1698]: time="2025-09-05T23:57:01.792986286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:01.793679 containerd[1698]: time="2025-09-05T23:57:01.793598206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:01.793679 containerd[1698]: time="2025-09-05T23:57:01.793622926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.793944 containerd[1698]: time="2025-09-05T23:57:01.793717207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.813381 systemd[1]: run-containerd-runc-k8s.io-7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0-runc.HcIr2G.mount: Deactivated successfully. Sep 5 23:57:01.824047 systemd[1]: Started cri-containerd-7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0.scope - libcontainer container 7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0. Sep 5 23:57:01.889911 containerd[1698]: time="2025-09-05T23:57:01.889847676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-47nvq,Uid:ed64e8c1-23f2-423a-b9fd-51cbac98b68d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0\"" Sep 5 23:57:01.941930 containerd[1698]: time="2025-09-05T23:57:01.940802255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:01.941930 containerd[1698]: time="2025-09-05T23:57:01.941130455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:01.941930 containerd[1698]: time="2025-09-05T23:57:01.941148015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.943680 containerd[1698]: time="2025-09-05T23:57:01.943285738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:01.967273 systemd[1]: Started cri-containerd-5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010.scope - libcontainer container 5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010. 
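
The "RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox", and later "StartContainer ... returns successfully" messages trace the standard CRI sequence kubelet drives against containerd. A minimal sketch of the same three RPCs over containerd's CRI socket, using the generated cri-api client; the metadata values are copied from the entries above (Attempt:1 marks a retried sandbox), and a real request carries far more configuration (image, mounts, log paths, security context) than shown here:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		// Values from the RunPodSandbox entry above.
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-668d6bf9bc-rf2fc",
			Uid:       "931034f2-80f5-4484-bfdc-20cfabfeeec2",
			Namespace: "kube-system",
			Attempt:   1,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Image, env, mounts, etc. omitted; containerd would reject a config this bare.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
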
Sep 5 23:57:02.008344 containerd[1698]: time="2025-09-05T23:57:02.008034652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qvx8t,Uid:ee5a4d86-adeb-4a20-b975-6bb9030118b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010\"" Sep 5 23:57:02.049787 containerd[1698]: time="2025-09-05T23:57:02.049682819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:02.049965 containerd[1698]: time="2025-09-05T23:57:02.049789979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:02.050081 containerd[1698]: time="2025-09-05T23:57:02.049817059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:02.050171 containerd[1698]: time="2025-09-05T23:57:02.050112620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:02.053823 containerd[1698]: time="2025-09-05T23:57:02.053220703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:02.054084 containerd[1698]: time="2025-09-05T23:57:02.053799464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:02.054084 containerd[1698]: time="2025-09-05T23:57:02.053813024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:02.054084 containerd[1698]: time="2025-09-05T23:57:02.053908704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:02.072990 systemd[1]: Started cri-containerd-5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8.scope - libcontainer container 5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8. Sep 5 23:57:02.076966 systemd[1]: Started cri-containerd-e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a.scope - libcontainer container e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a. 
Sep 5 23:57:02.112549 containerd[1698]: time="2025-09-05T23:57:02.112330251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8qjr6,Uid:b765c2c2-113c-4a43-beb8-f69a462337be,Namespace:calico-system,Attempt:1,} returns sandbox id \"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8\"" Sep 5 23:57:02.120359 containerd[1698]: time="2025-09-05T23:57:02.120247940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tblgm,Uid:c33313ed-3911-49ca-80e5-a8358a33d55f,Namespace:kube-system,Attempt:1,} returns sandbox id \"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a\"" Sep 5 23:57:02.125892 containerd[1698]: time="2025-09-05T23:57:02.125799426Z" level=info msg="CreateContainer within sandbox \"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:57:02.901468 systemd-networkd[1473]: calice14dd55aab: Gained IPv6LL Sep 5 23:57:02.923558 systemd-networkd[1473]: cali6d8fea3800b: Link UP Sep 5 23:57:02.925052 systemd-networkd[1473]: cali6d8fea3800b: Gained carrier Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.853 [INFO][5251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0 calico-apiserver-f87497d48- calico-apiserver 306b9be3-0678-43c7-80e8-c3eff7dac6f2 957 0 2025-09-05 23:56:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f87497d48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-8e502b48f1 calico-apiserver-f87497d48-k8227 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d8fea3800b [] [] }} ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.853 [INFO][5251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.876 [INFO][5264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" HandleID="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.876 [INFO][5264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" HandleID="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-8e502b48f1", "pod":"calico-apiserver-f87497d48-k8227", 
"timestamp":"2025-09-05 23:57:02.876693485 +0000 UTC"}, Hostname:"ci-4081.3.5-n-8e502b48f1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.876 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.876 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.876 [INFO][5264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-8e502b48f1' Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.885 [INFO][5264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.891 [INFO][5264] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.894 [INFO][5264] ipam/ipam.go 511: Trying affinity for 192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.897 [INFO][5264] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.899 [INFO][5264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.192/26 host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.899 [INFO][5264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.192/26 handle="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.901 [INFO][5264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519 Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.906 [INFO][5264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.192/26 handle="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.918 [INFO][5264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.200/26] block=192.168.123.192/26 handle="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.918 [INFO][5264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.200/26] handle="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" host="ci-4081.3.5-n-8e502b48f1" Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.918 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:57:02.947445 containerd[1698]: 2025-09-05 23:57:02.918 [INFO][5264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.200/26] IPv6=[] ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" HandleID="k8s-pod-network.70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.920 [INFO][5251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"306b9be3-0678-43c7-80e8-c3eff7dac6f2", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"", Pod:"calico-apiserver-f87497d48-k8227", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d8fea3800b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.920 [INFO][5251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.200/32] ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.920 [INFO][5251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d8fea3800b ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.926 [INFO][5251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.926 [INFO][5251] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"306b9be3-0678-43c7-80e8-c3eff7dac6f2", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519", Pod:"calico-apiserver-f87497d48-k8227", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d8fea3800b", MAC:"46:9f:68:ad:8f:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:02.948385 containerd[1698]: 2025-09-05 23:57:02.943 [INFO][5251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519" Namespace="calico-apiserver" Pod="calico-apiserver-f87497d48-k8227" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:03.133382 containerd[1698]: time="2025-09-05T23:57:03.133316058Z" level=info msg="CreateContainer within sandbox \"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"365391c1fd65a817633e0cf0cd510bffe86194428cae244e5d5571795f306057\"" Sep 5 23:57:03.139224 containerd[1698]: time="2025-09-05T23:57:03.136740102Z" level=info msg="StartContainer for \"365391c1fd65a817633e0cf0cd510bffe86194428cae244e5d5571795f306057\"" Sep 5 23:57:03.173009 systemd[1]: Started cri-containerd-365391c1fd65a817633e0cf0cd510bffe86194428cae244e5d5571795f306057.scope - libcontainer container 365391c1fd65a817633e0cf0cd510bffe86194428cae244e5d5571795f306057. Sep 5 23:57:03.227165 containerd[1698]: time="2025-09-05T23:57:03.226909005Z" level=info msg="StartContainer for \"365391c1fd65a817633e0cf0cd510bffe86194428cae244e5d5571795f306057\" returns successfully" Sep 5 23:57:03.292541 containerd[1698]: time="2025-09-05T23:57:03.292428400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:57:03.292541 containerd[1698]: time="2025-09-05T23:57:03.292491560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:57:03.292541 containerd[1698]: time="2025-09-05T23:57:03.292512440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:03.292807 containerd[1698]: time="2025-09-05T23:57:03.292590880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:57:03.323054 systemd[1]: Started cri-containerd-70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519.scope - libcontainer container 70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519. Sep 5 23:57:03.381439 containerd[1698]: time="2025-09-05T23:57:03.381341222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f87497d48-k8227,Uid:306b9be3-0678-43c7-80e8-c3eff7dac6f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519\"" Sep 5 23:57:04.090943 kubelet[3174]: I0905 23:57:04.090877 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rf2fc" podStartSLOduration=49.090860153 podStartE2EDuration="49.090860153s" podCreationTimestamp="2025-09-05 23:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:57:04.073044013 +0000 UTC m=+50.354324339" watchObservedRunningTime="2025-09-05 23:57:04.090860153 +0000 UTC m=+50.372140479" Sep 5 23:57:04.630308 systemd-networkd[1473]: cali6d8fea3800b: Gained IPv6LL Sep 5 23:57:05.534673 containerd[1698]: time="2025-09-05T23:57:05.534520664Z" level=info msg="CreateContainer within sandbox \"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a8452065b55d67be42005de6ab97d4991195804ed0f672d5acd9bfbd4350865\"" Sep 5 23:57:05.574991 containerd[1698]: time="2025-09-05T23:57:05.535163065Z" level=info msg="StartContainer for \"4a8452065b55d67be42005de6ab97d4991195804ed0f672d5acd9bfbd4350865\"" Sep 5 23:57:05.606035 systemd[1]: Started cri-containerd-4a8452065b55d67be42005de6ab97d4991195804ed0f672d5acd9bfbd4350865.scope - libcontainer container 4a8452065b55d67be42005de6ab97d4991195804ed0f672d5acd9bfbd4350865. 
Sep 5 23:57:05.639954 containerd[1698]: time="2025-09-05T23:57:05.639497813Z" level=info msg="StartContainer for \"4a8452065b55d67be42005de6ab97d4991195804ed0f672d5acd9bfbd4350865\" returns successfully" Sep 5 23:57:06.076863 kubelet[3174]: I0905 23:57:06.076717 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tblgm" podStartSLOduration=51.076632946 podStartE2EDuration="51.076632946s" podCreationTimestamp="2025-09-05 23:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:57:06.076243826 +0000 UTC m=+52.357524152" watchObservedRunningTime="2025-09-05 23:57:06.076632946 +0000 UTC m=+52.357913312" Sep 5 23:57:07.684133 containerd[1698]: time="2025-09-05T23:57:07.684016252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:07.687736 containerd[1698]: time="2025-09-05T23:57:07.687605216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:57:07.693462 containerd[1698]: time="2025-09-05T23:57:07.692642981Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:07.699261 containerd[1698]: time="2025-09-05T23:57:07.699225188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:07.699887 containerd[1698]: time="2025-09-05T23:57:07.699846308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 11.714799306s" Sep 5 23:57:07.699887 containerd[1698]: time="2025-09-05T23:57:07.699884429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:57:07.702155 containerd[1698]: time="2025-09-05T23:57:07.702126191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:57:07.716629 containerd[1698]: time="2025-09-05T23:57:07.716543246Z" level=info msg="CreateContainer within sandbox \"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:57:07.771822 containerd[1698]: time="2025-09-05T23:57:07.771645623Z" level=info msg="CreateContainer within sandbox \"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae\"" Sep 5 23:57:07.773468 containerd[1698]: time="2025-09-05T23:57:07.773312705Z" level=info msg="StartContainer for \"ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae\"" Sep 5 23:57:07.808015 systemd[1]: Started cri-containerd-ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae.scope 
- libcontainer container ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae. Sep 5 23:57:07.849570 containerd[1698]: time="2025-09-05T23:57:07.849426704Z" level=info msg="StartContainer for \"ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae\" returns successfully" Sep 5 23:57:08.080644 kubelet[3174]: I0905 23:57:08.080579 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86c5b95765-hgw7k" podStartSLOduration=24.363337554 podStartE2EDuration="36.080560983s" podCreationTimestamp="2025-09-05 23:56:32 +0000 UTC" firstStartedPulling="2025-09-05 23:56:55.984736602 +0000 UTC m=+42.266016928" lastFinishedPulling="2025-09-05 23:57:07.701960031 +0000 UTC m=+53.983240357" observedRunningTime="2025-09-05 23:57:08.078214141 +0000 UTC m=+54.359494467" watchObservedRunningTime="2025-09-05 23:57:08.080560983 +0000 UTC m=+54.361841309" Sep 5 23:57:09.190373 containerd[1698]: time="2025-09-05T23:57:09.190316773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:09.193502 containerd[1698]: time="2025-09-05T23:57:09.193424617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:57:09.203875 containerd[1698]: time="2025-09-05T23:57:09.202996266Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:09.209493 containerd[1698]: time="2025-09-05T23:57:09.209444993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:09.210366 containerd[1698]: time="2025-09-05T23:57:09.210331674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.508169243s" Sep 5 23:57:09.210485 containerd[1698]: time="2025-09-05T23:57:09.210467954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:57:09.212200 containerd[1698]: time="2025-09-05T23:57:09.212146556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:57:09.213265 containerd[1698]: time="2025-09-05T23:57:09.213228757Z" level=info msg="CreateContainer within sandbox \"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:57:09.269096 containerd[1698]: time="2025-09-05T23:57:09.269047695Z" level=info msg="CreateContainer within sandbox \"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2d5e102f3955c5b1e48678924dba195ea26892601e14458bff2a71214ae3f7d1\"" Sep 5 23:57:09.269915 containerd[1698]: time="2025-09-05T23:57:09.269607295Z" level=info msg="StartContainer for \"2d5e102f3955c5b1e48678924dba195ea26892601e14458bff2a71214ae3f7d1\"" Sep 5 23:57:09.304009 systemd[1]: Started 
cri-containerd-2d5e102f3955c5b1e48678924dba195ea26892601e14458bff2a71214ae3f7d1.scope - libcontainer container 2d5e102f3955c5b1e48678924dba195ea26892601e14458bff2a71214ae3f7d1. Sep 5 23:57:09.342932 containerd[1698]: time="2025-09-05T23:57:09.342883571Z" level=info msg="StartContainer for \"2d5e102f3955c5b1e48678924dba195ea26892601e14458bff2a71214ae3f7d1\" returns successfully" Sep 5 23:57:12.277073 containerd[1698]: time="2025-09-05T23:57:12.277030293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:12.283042 containerd[1698]: time="2025-09-05T23:57:12.283001739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:57:12.289680 containerd[1698]: time="2025-09-05T23:57:12.289627306Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:12.296112 containerd[1698]: time="2025-09-05T23:57:12.296059952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:12.296944 containerd[1698]: time="2025-09-05T23:57:12.296803433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.084612397s" Sep 5 23:57:12.296944 containerd[1698]: time="2025-09-05T23:57:12.296857273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:57:12.298788 containerd[1698]: time="2025-09-05T23:57:12.298756355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:57:12.300453 containerd[1698]: time="2025-09-05T23:57:12.300418197Z" level=info msg="CreateContainer within sandbox \"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:57:12.362135 containerd[1698]: time="2025-09-05T23:57:12.362056981Z" level=info msg="CreateContainer within sandbox \"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e3368c165bdf6092d6f5b83dcf76e14840055edc587a1d77ed03b532a5d7ecb\"" Sep 5 23:57:12.363626 containerd[1698]: time="2025-09-05T23:57:12.362976222Z" level=info msg="StartContainer for \"8e3368c165bdf6092d6f5b83dcf76e14840055edc587a1d77ed03b532a5d7ecb\"" Sep 5 23:57:12.408042 systemd[1]: Started cri-containerd-8e3368c165bdf6092d6f5b83dcf76e14840055edc587a1d77ed03b532a5d7ecb.scope - libcontainer container 8e3368c165bdf6092d6f5b83dcf76e14840055edc587a1d77ed03b532a5d7ecb. 
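The two pod_startup_latency_tracker figures above are internally consistent: podStartE2EDuration is the gap from podCreationTimestamp to the observed running time, and podStartSLOduration comes out to exactly that gap minus the image-pull window (firstStartedPulling to lastFinishedPulling). A minimal check in Go, assuming that relationship and using the timestamps copied from the calico-kube-controllers entry in the log:

```go
package main

import (
	"fmt"
	"time"
)

// Timestamp layout matching the kubelet fields quoted in the log above.
const layout = "2006-01-02 15:04:05 -0700 MST"

func parse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-kube-controllers-86c5b95765-hgw7k entry.
	created := parse("2025-09-05 23:56:32 +0000 UTC")
	firstPull := parse("2025-09-05 23:56:55.984736602 +0000 UTC")
	lastPull := parse("2025-09-05 23:57:07.701960031 +0000 UTC")
	observed := parse("2025-09-05 23:57:08.080560983 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println(e2e) // 36.080560983s, matching the log
	fmt.Println(slo) // 24.363337554s, matching podStartSLOduration
}
```

For the coredns pod earlier, both pull timestamps are the zero value ("0001-01-01 00:00:00"), so no pull window is subtracted and the SLO and E2E durations coincide at 51.076632946s, which is what that entry shows.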
Sep 5 23:57:12.445357 containerd[1698]: time="2025-09-05T23:57:12.445236587Z" level=info msg="StartContainer for \"8e3368c165bdf6092d6f5b83dcf76e14840055edc587a1d77ed03b532a5d7ecb\" returns successfully" Sep 5 23:57:13.859546 containerd[1698]: time="2025-09-05T23:57:13.859290416Z" level=info msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" Sep 5 23:57:14.084941 kubelet[3174]: I0905 23:57:14.084038 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:13.989 [WARNING][5593] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c33313ed-3911-49ca-80e5-a8358a33d55f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a", Pod:"coredns-668d6bf9bc-tblgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic558296a777", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:13.989 [INFO][5593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:13.989 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" iface="eth0" netns="" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:13.989 [INFO][5593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:13.989 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.045 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.050 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.050 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.073 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.073 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.075 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.087816 containerd[1698]: 2025-09-05 23:57:14.079 [INFO][5593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.089013 containerd[1698]: time="2025-09-05T23:57:14.087875353Z" level=info msg="TearDown network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" successfully" Sep 5 23:57:14.089013 containerd[1698]: time="2025-09-05T23:57:14.087914073Z" level=info msg="StopPodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" returns successfully" Sep 5 23:57:14.089013 containerd[1698]: time="2025-09-05T23:57:14.088365754Z" level=info msg="RemovePodSandbox for \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" Sep 5 23:57:14.089013 containerd[1698]: time="2025-09-05T23:57:14.088398714Z" level=info msg="Forcibly stopping sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\"" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.156 [WARNING][5619] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c33313ed-3911-49ca-80e5-a8358a33d55f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"e0c590b3ec46f3022ed1c4f0d2978aeb04b361855d907984e29723eb603a2b0a", Pod:"coredns-668d6bf9bc-tblgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic558296a777", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.157 [INFO][5619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.157 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" iface="eth0" netns="" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.157 [INFO][5619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.157 [INFO][5619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.180 [INFO][5626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.180 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.181 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.191 [WARNING][5626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.191 [INFO][5626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" HandleID="k8s-pod-network.a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--tblgm-eth0" Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.192 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.196399 containerd[1698]: 2025-09-05 23:57:14.194 [INFO][5619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97" Sep 5 23:57:14.197345 containerd[1698]: time="2025-09-05T23:57:14.197000996Z" level=info msg="TearDown network for sandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" successfully" Sep 5 23:57:14.216760 containerd[1698]: time="2025-09-05T23:57:14.216689338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:14.216928 containerd[1698]: time="2025-09-05T23:57:14.216774378Z" level=info msg="RemovePodSandbox \"a8267ccc66618056b9383605d45c9e0010bd0e74dc951536af326599359dae97\" returns successfully" Sep 5 23:57:14.218511 containerd[1698]: time="2025-09-05T23:57:14.218477740Z" level=info msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.295 [WARNING][5640] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"931034f2-80f5-4484-bfdc-20cfabfeeec2", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8", Pod:"coredns-668d6bf9bc-rf2fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c6c7226e85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.296 [INFO][5640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.296 [INFO][5640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" iface="eth0" netns="" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.296 [INFO][5640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.296 [INFO][5640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.341 [INFO][5647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.341 [INFO][5647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.341 [INFO][5647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.356 [WARNING][5647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.357 [INFO][5647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.360 [INFO][5647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.365455 containerd[1698]: 2025-09-05 23:57:14.363 [INFO][5640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.366318 containerd[1698]: time="2025-09-05T23:57:14.365502585Z" level=info msg="TearDown network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" successfully" Sep 5 23:57:14.366318 containerd[1698]: time="2025-09-05T23:57:14.365530265Z" level=info msg="StopPodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" returns successfully" Sep 5 23:57:14.366417 containerd[1698]: time="2025-09-05T23:57:14.366386346Z" level=info msg="RemovePodSandbox for \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" Sep 5 23:57:14.366444 containerd[1698]: time="2025-09-05T23:57:14.366423586Z" level=info msg="Forcibly stopping sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\"" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.413 [WARNING][5661] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"931034f2-80f5-4484-bfdc-20cfabfeeec2", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"ccd69428b930ed5706ca76336034f44422a814bfc454c31254091d8197a27bc8", Pod:"coredns-668d6bf9bc-rf2fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c6c7226e85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.413 [INFO][5661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.413 [INFO][5661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" iface="eth0" netns="" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.413 [INFO][5661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.413 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.443 [INFO][5668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.443 [INFO][5668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.443 [INFO][5668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.455 [WARNING][5668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.455 [INFO][5668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" HandleID="k8s-pod-network.7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-coredns--668d6bf9bc--rf2fc-eth0" Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.456 [INFO][5668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.462363 containerd[1698]: 2025-09-05 23:57:14.458 [INFO][5661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8" Sep 5 23:57:14.462363 containerd[1698]: time="2025-09-05T23:57:14.461028292Z" level=info msg="TearDown network for sandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" successfully" Sep 5 23:57:14.475164 containerd[1698]: time="2025-09-05T23:57:14.475122828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:14.475585 containerd[1698]: time="2025-09-05T23:57:14.475549269Z" level=info msg="RemovePodSandbox \"7ad959ab3252a0db74f2eca9a95eab220a5a8a047e4756afb1989d9318bd2de8\" returns successfully" Sep 5 23:57:14.476108 containerd[1698]: time="2025-09-05T23:57:14.476086869Z" level=info msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.547 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0", GenerateName:"calico-kube-controllers-86c5b95765-", Namespace:"calico-system", SelfLink:"", UID:"75131aad-a28c-402b-ad91-a268726f8ed5", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5b95765", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36", Pod:"calico-kube-controllers-86c5b95765-hgw7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408cdc0f56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.547 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.547 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" iface="eth0" netns="" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.547 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.547 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.578 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.578 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.578 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.593 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.593 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.595 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.600222 containerd[1698]: 2025-09-05 23:57:14.597 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.600696 containerd[1698]: time="2025-09-05T23:57:14.600268009Z" level=info msg="TearDown network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" successfully" Sep 5 23:57:14.600696 containerd[1698]: time="2025-09-05T23:57:14.600295649Z" level=info msg="StopPodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" returns successfully" Sep 5 23:57:14.601559 containerd[1698]: time="2025-09-05T23:57:14.601455930Z" level=info msg="RemovePodSandbox for \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" Sep 5 23:57:14.601728 containerd[1698]: time="2025-09-05T23:57:14.601710851Z" level=info msg="Forcibly stopping sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\"" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.665 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0", GenerateName:"calico-kube-controllers-86c5b95765-", Namespace:"calico-system", SelfLink:"", UID:"75131aad-a28c-402b-ad91-a268726f8ed5", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5b95765", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"11b79a0fe6ab3b3bbc446d7cc2d08dcb26c88429847d708d17402ad801933b36", Pod:"calico-kube-controllers-86c5b95765-hgw7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408cdc0f56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.665 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.665 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" iface="eth0" netns="" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.665 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.665 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.705 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.705 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.705 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.717 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.717 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" HandleID="k8s-pod-network.27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--kube--controllers--86c5b95765--hgw7k-eth0" Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.720 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.726220 containerd[1698]: 2025-09-05 23:57:14.723 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c" Sep 5 23:57:14.727793 containerd[1698]: time="2025-09-05T23:57:14.726193671Z" level=info msg="TearDown network for sandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" successfully" Sep 5 23:57:14.738857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748423899.mount: Deactivated successfully. Sep 5 23:57:14.739908 containerd[1698]: time="2025-09-05T23:57:14.739361925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:14.740035 containerd[1698]: time="2025-09-05T23:57:14.740001486Z" level=info msg="RemovePodSandbox \"27c20f786478a6c6fa8366d53b8567ed622028f4232879bc6696933457514e6c\" returns successfully" Sep 5 23:57:14.740591 containerd[1698]: time="2025-09-05T23:57:14.740567847Z" level=info msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.772 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"306b9be3-0678-43c7-80e8-c3eff7dac6f2", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519", Pod:"calico-apiserver-f87497d48-k8227", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d8fea3800b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.775 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.775 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" iface="eth0" netns="" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.775 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.775 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.796 [INFO][5733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.796 [INFO][5733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.796 [INFO][5733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.806 [WARNING][5733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.806 [INFO][5733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.808 [INFO][5733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.813132 containerd[1698]: 2025-09-05 23:57:14.809 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.813795 containerd[1698]: time="2025-09-05T23:57:14.813655329Z" level=info msg="TearDown network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" successfully" Sep 5 23:57:14.813795 containerd[1698]: time="2025-09-05T23:57:14.813687249Z" level=info msg="StopPodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" returns successfully" Sep 5 23:57:14.814508 containerd[1698]: time="2025-09-05T23:57:14.814468010Z" level=info msg="RemovePodSandbox for \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" Sep 5 23:57:14.814624 containerd[1698]: time="2025-09-05T23:57:14.814604170Z" level=info msg="Forcibly stopping sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\"" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.864 [WARNING][5751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"306b9be3-0678-43c7-80e8-c3eff7dac6f2", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519", Pod:"calico-apiserver-f87497d48-k8227", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d8fea3800b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.864 [INFO][5751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.864 [INFO][5751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" iface="eth0" netns="" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.864 [INFO][5751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.864 [INFO][5751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.895 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.895 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.895 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.914 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.914 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" HandleID="k8s-pod-network.7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--k8227-eth0" Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.916 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:14.921560 containerd[1698]: 2025-09-05 23:57:14.919 [INFO][5751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2" Sep 5 23:57:14.921560 containerd[1698]: time="2025-09-05T23:57:14.921367850Z" level=info msg="TearDown network for sandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" successfully" Sep 5 23:57:14.958303 containerd[1698]: time="2025-09-05T23:57:14.958251011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:14.959188 containerd[1698]: time="2025-09-05T23:57:14.958790772Z" level=info msg="RemovePodSandbox \"7d6c73e72504a16eafcfde6f7860e5ab43fcd86fd2f4274c3f36728c42c7a8d2\" returns successfully" Sep 5 23:57:14.960340 containerd[1698]: time="2025-09-05T23:57:14.960309974Z" level=info msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.015 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed64e8c1-23f2-423a-b9fd-51cbac98b68d", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0", Pod:"calico-apiserver-f87497d48-47nvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e03b975789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.015 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.015 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" iface="eth0" netns="" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.015 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.015 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.046 [INFO][5780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.047 [INFO][5780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.047 [INFO][5780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.061 [WARNING][5780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.061 [INFO][5780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.063 [INFO][5780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.066770 containerd[1698]: 2025-09-05 23:57:15.064 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.066770 containerd[1698]: time="2025-09-05T23:57:15.066746613Z" level=info msg="TearDown network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" successfully" Sep 5 23:57:15.067240 containerd[1698]: time="2025-09-05T23:57:15.066779973Z" level=info msg="StopPodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" returns successfully" Sep 5 23:57:15.068679 containerd[1698]: time="2025-09-05T23:57:15.068624175Z" level=info msg="RemovePodSandbox for \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" Sep 5 23:57:15.068679 containerd[1698]: time="2025-09-05T23:57:15.068658895Z" level=info msg="Forcibly stopping sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\"" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.139 [WARNING][5794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0", GenerateName:"calico-apiserver-f87497d48-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed64e8c1-23f2-423a-b9fd-51cbac98b68d", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f87497d48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"7a3e650798afd71fc502fbe25ea37bdb144021c920f519594dc1a1bb4dff41f0", Pod:"calico-apiserver-f87497d48-47nvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e03b975789", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.140 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.140 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" iface="eth0" netns="" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.140 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.140 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.175 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.175 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.175 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.191 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.191 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" HandleID="k8s-pod-network.b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Workload="ci--4081.3.5--n--8e502b48f1-k8s-calico--apiserver--f87497d48--47nvq-eth0" Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.194 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.197861 containerd[1698]: 2025-09-05 23:57:15.196 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8" Sep 5 23:57:15.198367 containerd[1698]: time="2025-09-05T23:57:15.197929601Z" level=info msg="TearDown network for sandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" successfully" Sep 5 23:57:15.210265 containerd[1698]: time="2025-09-05T23:57:15.209557574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:15.210265 containerd[1698]: time="2025-09-05T23:57:15.209664494Z" level=info msg="RemovePodSandbox \"b9a8843cfbe78f122b5063469dfadab50462319a27c0bab9f20d37e8871931f8\" returns successfully" Sep 5 23:57:15.210982 containerd[1698]: time="2025-09-05T23:57:15.210931695Z" level=info msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.281 [WARNING][5817] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.282 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.282 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" iface="eth0" netns="" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.282 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.282 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.336 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.336 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.336 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.351 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.351 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.353 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.359268 containerd[1698]: 2025-09-05 23:57:15.356 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.359681 containerd[1698]: time="2025-09-05T23:57:15.359440142Z" level=info msg="TearDown network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" successfully" Sep 5 23:57:15.359681 containerd[1698]: time="2025-09-05T23:57:15.359471422Z" level=info msg="StopPodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" returns successfully" Sep 5 23:57:15.360625 containerd[1698]: time="2025-09-05T23:57:15.360205143Z" level=info msg="RemovePodSandbox for \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" Sep 5 23:57:15.360625 containerd[1698]: time="2025-09-05T23:57:15.360245223Z" level=info msg="Forcibly stopping sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\"" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.407 [WARNING][5838] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" WorkloadEndpoint="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.407 [INFO][5838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.407 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" iface="eth0" netns="" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.407 [INFO][5838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.407 [INFO][5838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.429 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.429 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.429 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.437 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.437 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" HandleID="k8s-pod-network.3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Workload="ci--4081.3.5--n--8e502b48f1-k8s-whisker--589cbf6db4--hgmcp-eth0" Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.439 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.442226 containerd[1698]: 2025-09-05 23:57:15.440 [INFO][5838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7" Sep 5 23:57:15.442592 containerd[1698]: time="2025-09-05T23:57:15.442344716Z" level=info msg="TearDown network for sandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" successfully" Sep 5 23:57:15.664705 containerd[1698]: time="2025-09-05T23:57:15.664212005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:15.664705 containerd[1698]: time="2025-09-05T23:57:15.664286645Z" level=info msg="RemovePodSandbox \"3df0cb7b68578b730d0e4f07031fc69af38e22e4e8d8f3d0f38881d1d9f6b3c7\" returns successfully" Sep 5 23:57:15.665149 containerd[1698]: time="2025-09-05T23:57:15.665116366Z" level=info msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" Sep 5 23:57:15.737288 containerd[1698]: time="2025-09-05T23:57:15.737239447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:15.743576 containerd[1698]: time="2025-09-05T23:57:15.743499014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:57:15.750876 containerd[1698]: time="2025-09-05T23:57:15.750418382Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.719 [WARNING][5859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b765c2c2-113c-4a43-beb8-f69a462337be", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8", Pod:"csi-node-driver-8qjr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice14dd55aab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.719 [INFO][5859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.720 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" iface="eth0" netns="" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.720 [INFO][5859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.720 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.741 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.742 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.742 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.752 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.752 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.753 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.756685 containerd[1698]: 2025-09-05 23:57:15.755 [INFO][5859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.757669 containerd[1698]: time="2025-09-05T23:57:15.756724189Z" level=info msg="TearDown network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" successfully" Sep 5 23:57:15.757669 containerd[1698]: time="2025-09-05T23:57:15.756750629Z" level=info msg="StopPodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" returns successfully" Sep 5 23:57:15.757669 containerd[1698]: time="2025-09-05T23:57:15.757237909Z" level=info msg="RemovePodSandbox for \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" Sep 5 23:57:15.757669 containerd[1698]: time="2025-09-05T23:57:15.757266949Z" level=info msg="Forcibly stopping sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\"" Sep 5 23:57:15.759059 containerd[1698]: time="2025-09-05T23:57:15.759027831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:15.759732 containerd[1698]: time="2025-09-05T23:57:15.759697312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.460900517s" Sep 5 23:57:15.760033 containerd[1698]: time="2025-09-05T23:57:15.759978313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:57:15.762004 containerd[1698]: time="2025-09-05T23:57:15.761874475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:57:15.765124 containerd[1698]: time="2025-09-05T23:57:15.764938518Z" level=info msg="CreateContainer within sandbox \"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:57:15.819241 containerd[1698]: time="2025-09-05T23:57:15.819096699Z" level=info msg="CreateContainer within sandbox \"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815\"" Sep 5 23:57:15.820268 containerd[1698]: 
time="2025-09-05T23:57:15.820220140Z" level=info msg="StartContainer for \"ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815\"" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.797 [WARNING][5885] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b765c2c2-113c-4a43-beb8-f69a462337be", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8", Pod:"csi-node-driver-8qjr6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice14dd55aab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.798 [INFO][5885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.798 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" iface="eth0" netns="" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.798 [INFO][5885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.798 [INFO][5885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.825 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.826 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.826 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.839 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.839 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" HandleID="k8s-pod-network.86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Workload="ci--4081.3.5--n--8e502b48f1-k8s-csi--node--driver--8qjr6-eth0" Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.841 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.844888 containerd[1698]: 2025-09-05 23:57:15.842 [INFO][5885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440" Sep 5 23:57:15.844888 containerd[1698]: time="2025-09-05T23:57:15.844102047Z" level=info msg="TearDown network for sandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" successfully" Sep 5 23:57:15.863505 containerd[1698]: time="2025-09-05T23:57:15.863121708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:57:15.863505 containerd[1698]: time="2025-09-05T23:57:15.863211709Z" level=info msg="RemovePodSandbox \"86a0fe372f7a7c4d65e6a3a7f8fd16c22f4725e0f2698ad6566fb5f23122c440\" returns successfully" Sep 5 23:57:15.865033 containerd[1698]: time="2025-09-05T23:57:15.864700710Z" level=info msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" Sep 5 23:57:15.892000 systemd[1]: Started cri-containerd-ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815.scope - libcontainer container ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815. Sep 5 23:57:15.945612 containerd[1698]: time="2025-09-05T23:57:15.945563481Z" level=info msg="StartContainer for \"ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815\" returns successfully" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.937 [WARNING][5923] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ee5a4d86-adeb-4a20-b975-6bb9030118b8", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010", Pod:"goldmane-54d579b49d-qvx8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94c9ef9cf19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.937 [INFO][5923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.938 [INFO][5923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" iface="eth0" netns="" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.938 [INFO][5923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.938 [INFO][5923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.965 [INFO][5948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.966 [INFO][5948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.966 [INFO][5948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.978 [WARNING][5948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.978 [INFO][5948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.985 [INFO][5948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:15.990818 containerd[1698]: 2025-09-05 23:57:15.988 [INFO][5923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:15.992070 containerd[1698]: time="2025-09-05T23:57:15.990903652Z" level=info msg="TearDown network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" successfully" Sep 5 23:57:15.992070 containerd[1698]: time="2025-09-05T23:57:15.990927972Z" level=info msg="StopPodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" returns successfully" Sep 5 23:57:15.992070 containerd[1698]: time="2025-09-05T23:57:15.991472213Z" level=info msg="RemovePodSandbox for \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" Sep 5 23:57:15.992070 containerd[1698]: time="2025-09-05T23:57:15.991500693Z" level=info msg="Forcibly stopping sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\"" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.033 [WARNING][5966] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ee5a4d86-adeb-4a20-b975-6bb9030118b8", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 56, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-8e502b48f1", ContainerID:"5d4a3a433aae3d217a830baa0d517a8b82a869d8e97fffd074c0b3eb28267010", Pod:"goldmane-54d579b49d-qvx8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94c9ef9cf19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.033 [INFO][5966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.033 [INFO][5966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" iface="eth0" netns="" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.033 [INFO][5966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.033 [INFO][5966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.053 [INFO][5974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.053 [INFO][5974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.053 [INFO][5974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.062 [WARNING][5974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.062 [INFO][5974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" HandleID="k8s-pod-network.433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Workload="ci--4081.3.5--n--8e502b48f1-k8s-goldmane--54d579b49d--qvx8t-eth0" Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.063 [INFO][5974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:57:16.066125 containerd[1698]: 2025-09-05 23:57:16.065 [INFO][5966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9" Sep 5 23:57:16.066528 containerd[1698]: time="2025-09-05T23:57:16.066167257Z" level=info msg="TearDown network for sandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" successfully" Sep 5 23:57:16.129084 kubelet[3174]: I0905 23:57:16.128402 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f87497d48-47nvq" podStartSLOduration=39.723615733 podStartE2EDuration="50.128056726s" podCreationTimestamp="2025-09-05 23:56:26 +0000 UTC" firstStartedPulling="2025-09-05 23:57:01.893504641 +0000 UTC m=+48.174784967" lastFinishedPulling="2025-09-05 23:57:12.297945634 +0000 UTC m=+58.579225960" observedRunningTime="2025-09-05 23:57:13.10381727 +0000 UTC m=+59.385097596" watchObservedRunningTime="2025-09-05 23:57:16.128056726 +0000 UTC m=+62.409337012" Sep 5 23:57:16.204478 containerd[1698]: time="2025-09-05T23:57:16.203538051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:57:16.204478 containerd[1698]: time="2025-09-05T23:57:16.203635451Z" level=info msg="RemovePodSandbox \"433f1ecd15efdc5e590be15ccd8006b284ecf003e863e707fa0890e87b8162c9\" returns successfully" Sep 5 23:57:16.241229 kubelet[3174]: I0905 23:57:16.240034 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qvx8t" podStartSLOduration=31.488334952 podStartE2EDuration="45.240014812s" podCreationTimestamp="2025-09-05 23:56:31 +0000 UTC" firstStartedPulling="2025-09-05 23:57:02.009739494 +0000 UTC m=+48.291019780" lastFinishedPulling="2025-09-05 23:57:15.761419314 +0000 UTC m=+62.042699640" observedRunningTime="2025-09-05 23:57:16.127661966 +0000 UTC m=+62.408942292" watchObservedRunningTime="2025-09-05 23:57:16.240014812 +0000 UTC m=+62.521295138" Sep 5 23:57:17.138817 containerd[1698]: time="2025-09-05T23:57:17.138120582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:17.141802 containerd[1698]: time="2025-09-05T23:57:17.141754626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:57:17.147757 containerd[1698]: time="2025-09-05T23:57:17.147697032Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:17.154347 containerd[1698]: time="2025-09-05T23:57:17.154273840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:17.155512 containerd[1698]: time="2025-09-05T23:57:17.154980361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.393064846s" Sep 5 23:57:17.155512 containerd[1698]: time="2025-09-05T23:57:17.155017361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:57:17.156441 containerd[1698]: time="2025-09-05T23:57:17.156413082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:57:17.160983 containerd[1698]: time="2025-09-05T23:57:17.160949327Z" level=info msg="CreateContainer within sandbox \"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:57:17.212188 containerd[1698]: time="2025-09-05T23:57:17.212134945Z" level=info msg="CreateContainer within sandbox \"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"74cbda7de5780cac9b27ab439f380acdc99945a808894278f24b1fb49121fcfd\"" Sep 5 23:57:17.214079 containerd[1698]: time="2025-09-05T23:57:17.214044947Z" level=info msg="StartContainer for \"74cbda7de5780cac9b27ab439f380acdc99945a808894278f24b1fb49121fcfd\"" Sep 5 23:57:17.253004 systemd[1]: Started cri-containerd-74cbda7de5780cac9b27ab439f380acdc99945a808894278f24b1fb49121fcfd.scope - libcontainer container 
74cbda7de5780cac9b27ab439f380acdc99945a808894278f24b1fb49121fcfd. Sep 5 23:57:17.282927 containerd[1698]: time="2025-09-05T23:57:17.282887704Z" level=info msg="StartContainer for \"74cbda7de5780cac9b27ab439f380acdc99945a808894278f24b1fb49121fcfd\" returns successfully" Sep 5 23:57:17.551405 containerd[1698]: time="2025-09-05T23:57:17.551207326Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:17.559899 containerd[1698]: time="2025-09-05T23:57:17.557864653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:57:17.560105 containerd[1698]: time="2025-09-05T23:57:17.560072896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 402.584653ms" Sep 5 23:57:17.560105 containerd[1698]: time="2025-09-05T23:57:17.560107056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:57:17.561458 containerd[1698]: time="2025-09-05T23:57:17.561421057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:57:17.563846 containerd[1698]: time="2025-09-05T23:57:17.563798420Z" level=info msg="CreateContainer within sandbox \"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:57:17.624633 containerd[1698]: time="2025-09-05T23:57:17.624489048Z" level=info msg="CreateContainer within sandbox \"70526592802ddf4f39d21c5d7b4ff78d88f1fcffad41704377a81626b5fc5519\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60c1349a801e6f4d580879b3b7757a4c7c6d333bf435c76b75596791a4561602\"" Sep 5 23:57:17.625599 containerd[1698]: time="2025-09-05T23:57:17.625397289Z" level=info msg="StartContainer for \"60c1349a801e6f4d580879b3b7757a4c7c6d333bf435c76b75596791a4561602\"" Sep 5 23:57:17.655027 systemd[1]: Started cri-containerd-60c1349a801e6f4d580879b3b7757a4c7c6d333bf435c76b75596791a4561602.scope - libcontainer container 60c1349a801e6f4d580879b3b7757a4c7c6d333bf435c76b75596791a4561602. Sep 5 23:57:17.690321 containerd[1698]: time="2025-09-05T23:57:17.690199002Z" level=info msg="StartContainer for \"60c1349a801e6f4d580879b3b7757a4c7c6d333bf435c76b75596791a4561602\" returns successfully" Sep 5 23:57:18.133725 kubelet[3174]: I0905 23:57:18.133569 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f87497d48-k8227" podStartSLOduration=37.972000645 podStartE2EDuration="52.13345258s" podCreationTimestamp="2025-09-05 23:56:26 +0000 UTC" firstStartedPulling="2025-09-05 23:57:03.399398322 +0000 UTC m=+49.680678648" lastFinishedPulling="2025-09-05 23:57:17.560850257 +0000 UTC m=+63.842130583" observedRunningTime="2025-09-05 23:57:18.130603017 +0000 UTC m=+64.411883343" watchObservedRunningTime="2025-09-05 23:57:18.13345258 +0000 UTC m=+64.414732866" Sep 5 23:57:19.427109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605018656.mount: Deactivated successfully. 
Sep 5 23:57:20.366996 containerd[1698]: time="2025-09-05T23:57:20.366944691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:20.373027 containerd[1698]: time="2025-09-05T23:57:20.372828377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:57:20.380861 containerd[1698]: time="2025-09-05T23:57:20.380773906Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:20.386852 containerd[1698]: time="2025-09-05T23:57:20.386584073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:20.387349 containerd[1698]: time="2025-09-05T23:57:20.387312234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.825773936s" Sep 5 23:57:20.387349 containerd[1698]: time="2025-09-05T23:57:20.387346194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:57:20.388530 containerd[1698]: time="2025-09-05T23:57:20.388511035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:57:20.391173 containerd[1698]: time="2025-09-05T23:57:20.391134598Z" level=info msg="CreateContainer within sandbox \"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:57:20.456501 containerd[1698]: time="2025-09-05T23:57:20.456419031Z" level=info msg="CreateContainer within sandbox \"ca846cef8bc6a788d3257f35c83fd5338336ef68a53e6520c9a3b8cc8577be7d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5fc8ff0542241846e88d575d0ac4feff5c50936ea8fed9cf265abb0d08f47e7c\"" Sep 5 23:57:20.457867 containerd[1698]: time="2025-09-05T23:57:20.457071552Z" level=info msg="StartContainer for \"5fc8ff0542241846e88d575d0ac4feff5c50936ea8fed9cf265abb0d08f47e7c\"" Sep 5 23:57:20.491998 systemd[1]: Started cri-containerd-5fc8ff0542241846e88d575d0ac4feff5c50936ea8fed9cf265abb0d08f47e7c.scope - libcontainer container 5fc8ff0542241846e88d575d0ac4feff5c50936ea8fed9cf265abb0d08f47e7c. 
Sep 5 23:57:20.538875 containerd[1698]: time="2025-09-05T23:57:20.538805244Z" level=info msg="StartContainer for \"5fc8ff0542241846e88d575d0ac4feff5c50936ea8fed9cf265abb0d08f47e7c\" returns successfully" Sep 5 23:57:21.147739 kubelet[3174]: I0905 23:57:21.147662 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8455d55b59-rg6zb" podStartSLOduration=7.278085618 podStartE2EDuration="28.147627128s" podCreationTimestamp="2025-09-05 23:56:53 +0000 UTC" firstStartedPulling="2025-09-05 23:56:59.518854405 +0000 UTC m=+45.800134731" lastFinishedPulling="2025-09-05 23:57:20.388395915 +0000 UTC m=+66.669676241" observedRunningTime="2025-09-05 23:57:21.147552808 +0000 UTC m=+67.428833134" watchObservedRunningTime="2025-09-05 23:57:21.147627128 +0000 UTC m=+67.428907454" Sep 5 23:57:24.030084 systemd[1]: run-containerd-runc-k8s.io-5c3d139d95a869c06f468da3f562fba38ca2c58e3e7e87c022e46b158f725070-runc.E3YPBI.mount: Deactivated successfully. Sep 5 23:57:24.729145 containerd[1698]: time="2025-09-05T23:57:24.729085305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:24.773147 containerd[1698]: time="2025-09-05T23:57:24.773098834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:57:24.778525 containerd[1698]: time="2025-09-05T23:57:24.778476800Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:24.828875 containerd[1698]: time="2025-09-05T23:57:24.828769456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:57:24.829990 containerd[1698]: time="2025-09-05T23:57:24.829475617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 4.440853862s" Sep 5 23:57:24.829990 containerd[1698]: time="2025-09-05T23:57:24.829516537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:57:24.832223 containerd[1698]: time="2025-09-05T23:57:24.832155380Z" level=info msg="CreateContainer within sandbox \"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:57:25.189675 containerd[1698]: time="2025-09-05T23:57:25.189554701Z" level=info msg="CreateContainer within sandbox \"5527b2874ebaab7e1a75759e8fdd2e02d9e81c8da179715f1cb5d6e9bd4be6c8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d646d9f5564a01aa40079b6d6d8271a74ed0035fba96d1b461fb348f63afeff5\"" Sep 5 23:57:25.190435 containerd[1698]: time="2025-09-05T23:57:25.190385902Z" level=info msg="StartContainer for \"d646d9f5564a01aa40079b6d6d8271a74ed0035fba96d1b461fb348f63afeff5\"" Sep 5 
23:57:25.223047 systemd[1]: Started cri-containerd-d646d9f5564a01aa40079b6d6d8271a74ed0035fba96d1b461fb348f63afeff5.scope - libcontainer container d646d9f5564a01aa40079b6d6d8271a74ed0035fba96d1b461fb348f63afeff5. Sep 5 23:57:25.254383 containerd[1698]: time="2025-09-05T23:57:25.254340174Z" level=info msg="StartContainer for \"d646d9f5564a01aa40079b6d6d8271a74ed0035fba96d1b461fb348f63afeff5\" returns successfully" Sep 5 23:57:25.963684 kubelet[3174]: I0905 23:57:25.963645 3174 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:57:25.968088 kubelet[3174]: I0905 23:57:25.968059 3174 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:57:26.166433 kubelet[3174]: I0905 23:57:26.166357 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8qjr6" podStartSLOduration=32.450927432 podStartE2EDuration="55.166341036s" podCreationTimestamp="2025-09-05 23:56:31 +0000 UTC" firstStartedPulling="2025-09-05 23:57:02.114740494 +0000 UTC m=+48.396020820" lastFinishedPulling="2025-09-05 23:57:24.830154138 +0000 UTC m=+71.111434424" observedRunningTime="2025-09-05 23:57:26.163282633 +0000 UTC m=+72.444562999" watchObservedRunningTime="2025-09-05 23:57:26.166341036 +0000 UTC m=+72.447621362" Sep 5 23:57:32.444143 update_engine[1665]: I20250905 23:57:32.443814 1665 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 23:57:32.444143 update_engine[1665]: I20250905 23:57:32.443886 1665 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 23:57:32.444143 update_engine[1665]: I20250905 23:57:32.444087 1665 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 23:57:32.779779 update_engine[1665]: I20250905 23:57:32.778980 1665 omaha_request_params.cc:62] Current group set to lts Sep 5 23:57:32.779779 update_engine[1665]: I20250905 23:57:32.779088 1665 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 23:57:32.779779 update_engine[1665]: I20250905 23:57:32.779098 1665 update_attempter.cc:643] Scheduling an action processor start. 
Sep 5 23:57:32.779779 update_engine[1665]: I20250905 23:57:32.779116 1665 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:57:32.781771 update_engine[1665]: I20250905 23:57:32.780344 1665 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 23:57:32.781771 update_engine[1665]: I20250905 23:57:32.780417 1665 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:57:32.781771 update_engine[1665]: I20250905 23:57:32.780426 1665 omaha_request_action.cc:272] Request: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: Sep 5 23:57:32.781771 update_engine[1665]: I20250905 23:57:32.780432 1665 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:57:32.786906 locksmithd[1717]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 23:57:32.788280 update_engine[1665]: I20250905 23:57:32.787940 1665 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:57:32.788280 update_engine[1665]: I20250905 23:57:32.788239 1665 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:57:32.896447 update_engine[1665]: E20250905 23:57:32.896322 1665 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:57:32.896447 update_engine[1665]: I20250905 23:57:32.896416 1665 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 23:57:38.960397 kubelet[3174]: I0905 23:57:38.960351 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:57:43.449384 update_engine[1665]: I20250905 23:57:43.448885 1665 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:57:43.449384 update_engine[1665]: I20250905 23:57:43.449107 1665 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:57:43.449384 update_engine[1665]: I20250905 23:57:43.449332 1665 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:57:43.524506 update_engine[1665]: E20250905 23:57:43.524365 1665 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:57:43.524688 update_engine[1665]: I20250905 23:57:43.524466 1665 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 23:57:46.126003 systemd[1]: run-containerd-runc-k8s.io-ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815-runc.hEi74d.mount: Deactivated successfully. Sep 5 23:57:53.449862 update_engine[1665]: I20250905 23:57:53.448283 1665 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:57:53.449862 update_engine[1665]: I20250905 23:57:53.448529 1665 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:57:53.449862 update_engine[1665]: I20250905 23:57:53.448761 1665 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 23:57:53.461721 update_engine[1665]: E20250905 23:57:53.461606 1665 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:57:53.461721 update_engine[1665]: I20250905 23:57:53.461690 1665 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 23:58:03.448379 update_engine[1665]: I20250905 23:58:03.447875 1665 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:58:03.448379 update_engine[1665]: I20250905 23:58:03.448104 1665 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:58:03.448379 update_engine[1665]: I20250905 23:58:03.448330 1665 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:58:03.455030 update_engine[1665]: E20250905 23:58:03.454169 1665 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454238 1665 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454247 1665 omaha_request_action.cc:617] Omaha request response: Sep 5 23:58:03.455030 update_engine[1665]: E20250905 23:58:03.454329 1665 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454345 1665 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454351 1665 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454355 1665 update_attempter.cc:306] Processing Done. Sep 5 23:58:03.455030 update_engine[1665]: E20250905 23:58:03.454370 1665 update_attempter.cc:619] Update failed. Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454375 1665 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454380 1665 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454384 1665 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454452 1665 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454475 1665 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:58:03.455030 update_engine[1665]: I20250905 23:58:03.454480 1665 omaha_request_action.cc:272] Request: Sep 5 23:58:03.455434 locksmithd[1717]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 23:58:03.455675 update_engine[1665]: I20250905 23:58:03.454486 1665 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:58:03.455675 update_engine[1665]: I20250905 23:58:03.454651 1665 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:58:03.455675 update_engine[1665]: I20250905 23:58:03.454852 1665 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:58:03.461944 update_engine[1665]: E20250905 23:58:03.461905 1665 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462090 1665 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462105 1665 omaha_request_action.cc:617] Omaha request response: Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462113 1665 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462117 1665 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462122 1665 update_attempter.cc:306] Processing Done. Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462128 1665 update_attempter.cc:310] Error event sent. Sep 5 23:58:03.462222 update_engine[1665]: I20250905 23:58:03.462138 1665 update_check_scheduler.cc:74] Next update check in 45m41s Sep 5 23:58:03.462535 locksmithd[1717]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 23:58:28.024264 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:56328.service - OpenSSH per-connection server daemon (10.200.16.10:56328). Sep 5 23:58:28.473800 sshd[6403]: Accepted publickey for core from 10.200.16.10 port 56328 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:28.475793 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:28.481344 systemd-logind[1662]: New session 10 of user core. Sep 5 23:58:28.488997 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:58:28.905815 sshd[6403]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:28.910177 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:56328.service: Deactivated successfully. Sep 5 23:58:28.912581 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:58:28.913819 systemd-logind[1662]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:58:28.915855 systemd-logind[1662]: Removed session 10.
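While update_engine cycles, locksmithd mirrors the attempt as single-line key=value status reports, moving from UPDATE_STATUS_CHECKING_FOR_UPDATE through UPDATE_STATUS_REPORTING_ERROR_EVENT back to UPDATE_STATUS_IDLE once the next check is scheduled 45m41s out. Because only CurrentOperation is quoted, the format parses cleanly with shell-style splitting; a small Python sketch:

    import shlex

    def parse_status(line: str) -> dict:
        """Split a locksmithd status line into key/value pairs."""
        return dict(field.split("=", 1) for field in shlex.split(line))

    s = 'LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0'
    print(parse_status(s)["CurrentOperation"])  # UPDATE_STATUS_IDLE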
Sep 5 23:58:33.988312 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:33456.service - OpenSSH per-connection server daemon (10.200.16.10:33456). Sep 5 23:58:34.448031 sshd[6459]: Accepted publickey for core from 10.200.16.10 port 33456 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:34.449944 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:34.456037 systemd-logind[1662]: New session 11 of user core. Sep 5 23:58:34.460006 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:58:34.898190 sshd[6459]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:34.904563 systemd-logind[1662]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:58:34.907638 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:33456.service: Deactivated successfully. Sep 5 23:58:34.913183 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:58:34.915088 systemd-logind[1662]: Removed session 11. Sep 5 23:58:39.973141 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:51734.service - OpenSSH per-connection server daemon (10.200.16.10:51734). Sep 5 23:58:40.392955 sshd[6492]: Accepted publickey for core from 10.200.16.10 port 51734 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:40.394407 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:40.398304 systemd-logind[1662]: New session 12 of user core. Sep 5 23:58:40.403995 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:58:40.778452 sshd[6492]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:40.781864 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:51734.service: Deactivated successfully. Sep 5 23:58:40.783720 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:58:40.786132 systemd-logind[1662]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:58:40.787394 systemd-logind[1662]: Removed session 12. Sep 5 23:58:40.865751 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:51746.service - OpenSSH per-connection server daemon (10.200.16.10:51746). Sep 5 23:58:41.327032 sshd[6506]: Accepted publickey for core from 10.200.16.10 port 51746 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:41.328421 sshd[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:41.332368 systemd-logind[1662]: New session 13 of user core. Sep 5 23:58:41.339024 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:58:41.757745 sshd[6506]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:41.761439 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:51746.service: Deactivated successfully. Sep 5 23:58:41.763634 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:58:41.764634 systemd-logind[1662]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:58:41.765572 systemd-logind[1662]: Removed session 13. Sep 5 23:58:41.845271 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:51756.service - OpenSSH per-connection server daemon (10.200.16.10:51756). Sep 5 23:58:42.304814 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 51756 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:42.306611 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:42.311684 systemd-logind[1662]: New session 14 of user core. 
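Each SSH connection above gets its own transient unit: systemd accepts the TCP connection and spawns a per-connection sshd@... instance, consistent with socket activation (Accept=yes), which is why the units count up (sshd@7, sshd@8, ...) and are deactivated as soon as the session ends. The instance name encodes a sequence number plus the local and remote endpoints; a sketch that unpacks the scheme, with the pattern inferred from the unit names in this log:

    import re

    UNIT = re.compile(
        r"sshd@(?P<seq>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
        r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
    )

    m = UNIT.fullmatch("sshd@8-10.200.20.33:22-10.200.16.10:33456.service")
    print(m.group("raddr"), m.group("rport"))  # 10.200.16.10 33456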
Sep 5 23:58:42.322019 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:58:42.719043 sshd[6517]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:42.723984 systemd-logind[1662]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:58:42.724714 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:51756.service: Deactivated successfully. Sep 5 23:58:42.727583 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:58:42.728795 systemd-logind[1662]: Removed session 14. Sep 5 23:58:47.823546 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:51760.service - OpenSSH per-connection server daemon (10.200.16.10:51760). Sep 5 23:58:48.279943 sshd[6574]: Accepted publickey for core from 10.200.16.10 port 51760 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:48.281319 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:48.285204 systemd-logind[1662]: New session 15 of user core. Sep 5 23:58:48.289983 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:58:48.687889 sshd[6574]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:48.691397 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:51760.service: Deactivated successfully. Sep 5 23:58:48.693172 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:58:48.693916 systemd-logind[1662]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:58:48.694965 systemd-logind[1662]: Removed session 15. Sep 5 23:58:53.760537 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:56370.service - OpenSSH per-connection server daemon (10.200.16.10:56370). Sep 5 23:58:54.183226 sshd[6587]: Accepted publickey for core from 10.200.16.10 port 56370 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:58:54.184528 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:58:54.189399 systemd-logind[1662]: New session 16 of user core. Sep 5 23:58:54.196005 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:58:54.576529 sshd[6587]: pam_unix(sshd:session): session closed for user core Sep 5 23:58:54.579894 systemd-logind[1662]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:58:54.581763 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:56370.service: Deactivated successfully. Sep 5 23:58:54.584205 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:58:54.586877 systemd-logind[1662]: Removed session 16. Sep 5 23:58:59.663976 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:56378.service - OpenSSH per-connection server daemon (10.200.16.10:56378). Sep 5 23:59:00.082109 sshd[6620]: Accepted publickey for core from 10.200.16.10 port 56378 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:00.084683 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:00.091403 systemd-logind[1662]: New session 17 of user core. Sep 5 23:59:00.100140 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:59:00.492420 sshd[6620]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:00.498381 systemd-logind[1662]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:59:00.499004 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:56378.service: Deactivated successfully. Sep 5 23:59:00.503638 systemd[1]: session-17.scope: Deactivated successfully. 
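Every login in this log presents the same public key, which sshd identifies by its SHA256 fingerprint (the fK6n... string). OpenSSH derives that string as the unpadded base64 of the SHA-256 digest of the raw key blob, so a key's fingerprint can be reproduced from its authorized_keys entry; a minimal sketch:

    import base64
    import hashlib

    def sha256_fingerprint(pubkey_b64: str) -> str:
        """Fingerprint an SSH public key the way sshd logs it ("SHA256:...")."""
        blob = base64.b64decode(pubkey_b64)
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

Feeding it the second (base64) field of the corresponding authorized_keys line yields the fingerprint value logged above for that key.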
Sep 5 23:59:00.506300 systemd-logind[1662]: Removed session 17. Sep 5 23:59:05.575240 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:56324.service - OpenSSH per-connection server daemon (10.200.16.10:56324). Sep 5 23:59:05.997709 sshd[6633]: Accepted publickey for core from 10.200.16.10 port 56324 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:05.999228 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:06.004781 systemd-logind[1662]: New session 18 of user core. Sep 5 23:59:06.011017 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:59:06.390465 sshd[6633]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:06.394117 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:56324.service: Deactivated successfully. Sep 5 23:59:06.396049 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:59:06.396738 systemd-logind[1662]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:59:06.397689 systemd-logind[1662]: Removed session 18. Sep 5 23:59:06.489070 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:56336.service - OpenSSH per-connection server daemon (10.200.16.10:56336). Sep 5 23:59:06.939702 sshd[6646]: Accepted publickey for core from 10.200.16.10 port 56336 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:06.941163 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:06.945372 systemd-logind[1662]: New session 19 of user core. Sep 5 23:59:06.953003 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:59:07.454443 sshd[6646]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:07.458337 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:56336.service: Deactivated successfully. Sep 5 23:59:07.460567 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:59:07.461629 systemd-logind[1662]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:59:07.463627 systemd-logind[1662]: Removed session 19. Sep 5 23:59:07.540370 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:56338.service - OpenSSH per-connection server daemon (10.200.16.10:56338). Sep 5 23:59:07.992716 sshd[6657]: Accepted publickey for core from 10.200.16.10 port 56338 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:07.994106 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:07.997930 systemd-logind[1662]: New session 20 of user core. Sep 5 23:59:08.006004 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:59:08.081272 systemd[1]: run-containerd-runc-k8s.io-ab50465f333d454be4360362371618b4451e25d13d0ba5638acd77f1dd0bc4ae-runc.jPP2q9.mount: Deactivated successfully. Sep 5 23:59:08.941229 sshd[6657]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:08.945891 systemd-logind[1662]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:59:08.946479 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:56338.service: Deactivated successfully. Sep 5 23:59:08.948775 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:59:08.950451 systemd-logind[1662]: Removed session 20. Sep 5 23:59:09.032161 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:56348.service - OpenSSH per-connection server daemon (10.200.16.10:56348). 
Sep 5 23:59:09.517743 sshd[6696]: Accepted publickey for core from 10.200.16.10 port 56348 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:09.519228 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:09.523545 systemd-logind[1662]: New session 21 of user core. Sep 5 23:59:09.533028 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:59:10.055112 sshd[6696]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:10.058959 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:56348.service: Deactivated successfully. Sep 5 23:59:10.063071 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:59:10.064077 systemd-logind[1662]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:59:10.065019 systemd-logind[1662]: Removed session 21. Sep 5 23:59:10.129004 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:49764.service - OpenSSH per-connection server daemon (10.200.16.10:49764). Sep 5 23:59:10.555958 sshd[6707]: Accepted publickey for core from 10.200.16.10 port 49764 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:10.557346 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:10.561189 systemd-logind[1662]: New session 22 of user core. Sep 5 23:59:10.567994 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:59:10.943368 sshd[6707]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:10.947589 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:49764.service: Deactivated successfully. Sep 5 23:59:10.950129 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:59:10.950991 systemd-logind[1662]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:59:10.952380 systemd-logind[1662]: Removed session 22. Sep 5 23:59:16.028259 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:49766.service - OpenSSH per-connection server daemon (10.200.16.10:49766). Sep 5 23:59:16.450753 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 49766 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:16.452182 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:16.456494 systemd-logind[1662]: New session 23 of user core. Sep 5 23:59:16.462985 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:59:16.843087 sshd[6725]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:16.847464 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:49766.service: Deactivated successfully. Sep 5 23:59:16.850634 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:59:16.851361 systemd-logind[1662]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:59:16.852822 systemd-logind[1662]: Removed session 23. Sep 5 23:59:21.926183 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:34892.service - OpenSSH per-connection server daemon (10.200.16.10:34892). Sep 5 23:59:22.344994 sshd[6759]: Accepted publickey for core from 10.200.16.10 port 34892 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:22.346403 sshd[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:22.350806 systemd-logind[1662]: New session 24 of user core. Sep 5 23:59:22.356028 systemd[1]: Started session-24.scope - Session 24 of User core. 
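Pairing each "New session N" line with its "Removed session N" line gives the wall-clock lifetime of a session; in this log the interactive sessions are short, typically under a second between open and close. A sketch that computes those durations from journal lines, assuming one entry per line as journalctl emits them and a year of 2025 (the journal timestamps omit it):

    import re
    from datetime import datetime

    NEW = re.compile(r"(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
    GONE = re.compile(r"(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def ts(stamp: str) -> datetime:
        # Journal lines omit the year; 2025 is assumed from context.
        return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        """Map session id -> timedelta between 'New session' and 'Removed session'."""
        opened, durations = {}, {}
        for line in lines:
            if m := NEW.search(line):
                opened[m.group(2)] = ts(m.group(1))
            elif (m := GONE.search(line)) and m.group(2) in opened:
                durations[m.group(2)] = ts(m.group(1)) - opened[m.group(2)]
        return durations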
Sep 5 23:59:22.731567 sshd[6759]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:22.735911 systemd-logind[1662]: Session 24 logged out. Waiting for processes to exit. Sep 5 23:59:22.737082 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:34892.service: Deactivated successfully. Sep 5 23:59:22.739504 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:59:22.742635 systemd-logind[1662]: Removed session 24. Sep 5 23:59:27.821143 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:34894.service - OpenSSH per-connection server daemon (10.200.16.10:34894). Sep 5 23:59:28.243364 sshd[6793]: Accepted publickey for core from 10.200.16.10 port 34894 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:28.244915 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:28.249470 systemd-logind[1662]: New session 25 of user core. Sep 5 23:59:28.257996 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:59:28.641874 sshd[6793]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:28.645793 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:34894.service: Deactivated successfully. Sep 5 23:59:28.647878 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:59:28.648689 systemd-logind[1662]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:59:28.649707 systemd-logind[1662]: Removed session 25. Sep 5 23:59:33.727042 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:47012.service - OpenSSH per-connection server daemon (10.200.16.10:47012). Sep 5 23:59:34.188980 sshd[6827]: Accepted publickey for core from 10.200.16.10 port 47012 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:34.190380 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:34.196666 systemd-logind[1662]: New session 26 of user core. Sep 5 23:59:34.200017 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 23:59:34.592205 sshd[6827]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:34.595669 systemd-logind[1662]: Session 26 logged out. Waiting for processes to exit. Sep 5 23:59:34.595816 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:47012.service: Deactivated successfully. Sep 5 23:59:34.597684 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 23:59:34.601722 systemd-logind[1662]: Removed session 26. Sep 5 23:59:39.694207 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:47024.service - OpenSSH per-connection server daemon (10.200.16.10:47024). Sep 5 23:59:40.177854 sshd[6868]: Accepted publickey for core from 10.200.16.10 port 47024 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:40.179300 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:40.185374 systemd-logind[1662]: New session 27 of user core. Sep 5 23:59:40.188991 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 23:59:40.602989 sshd[6868]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:40.607243 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:47024.service: Deactivated successfully. Sep 5 23:59:40.609445 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 23:59:40.611404 systemd-logind[1662]: Session 27 logged out. Waiting for processes to exit. Sep 5 23:59:40.612453 systemd-logind[1662]: Removed session 27. 
Sep 5 23:59:45.686010 systemd[1]: Started sshd@25-10.200.20.33:22-10.200.16.10:43562.service - OpenSSH per-connection server daemon (10.200.16.10:43562). Sep 5 23:59:46.119282 systemd[1]: run-containerd-runc-k8s.io-ef796e3239be696d5a86e65b675a8402466d6b8431f8b9d7e5722315bdaf0815-runc.AvwoyD.mount: Deactivated successfully. Sep 5 23:59:46.140703 sshd[6883]: Accepted publickey for core from 10.200.16.10 port 43562 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:59:46.142794 sshd[6883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:59:46.148008 systemd-logind[1662]: New session 28 of user core. Sep 5 23:59:46.153177 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 23:59:46.544687 sshd[6883]: pam_unix(sshd:session): session closed for user core Sep 5 23:59:46.548379 systemd-logind[1662]: Session 28 logged out. Waiting for processes to exit. Sep 5 23:59:46.549448 systemd[1]: sshd@25-10.200.20.33:22-10.200.16.10:43562.service: Deactivated successfully. Sep 5 23:59:46.551436 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 23:59:46.552677 systemd-logind[1662]: Removed session 28.