Aug 13 00:20:44.420559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:20:44.420582 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:20:44.420591 kernel: KASLR enabled
Aug 13 00:20:44.420597 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Aug 13 00:20:44.420605 kernel: printk: bootconsole [pl11] enabled
Aug 13 00:20:44.420611 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:20:44.420618 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Aug 13 00:20:44.420624 kernel: random: crng init done
Aug 13 00:20:44.420630 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:20:44.420636 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Aug 13 00:20:44.420643 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420649 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420657 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 13 00:20:44.420664 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420671 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420678 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420685 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420693 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420700 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420706 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Aug 13 00:20:44.420713 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.420719 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Aug 13 00:20:44.420726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Aug 13 00:20:44.420732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Aug 13 00:20:44.420738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Aug 13 00:20:44.420745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Aug 13 00:20:44.420751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Aug 13 00:20:44.420757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Aug 13 00:20:44.420765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Aug 13 00:20:44.420772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Aug 13 00:20:44.420778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Aug 13 00:20:44.420784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Aug 13 00:20:44.420791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Aug 13 00:20:44.420797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Aug 13 00:20:44.420803 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Aug 13 00:20:44.420810 kernel: Zone ranges:
Aug 13 00:20:44.420816 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Aug 13 00:20:44.420822 kernel:   DMA32    empty
Aug 13 00:20:44.420829 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:20:44.423907 kernel: Movable zone start for each node
Aug 13 00:20:44.423926 kernel: Early memory node ranges
Aug 13 00:20:44.423934 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Aug 13 00:20:44.423941 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Aug 13 00:20:44.423947 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Aug 13 00:20:44.423954 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Aug 13 00:20:44.423963 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Aug 13 00:20:44.423969 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Aug 13 00:20:44.423977 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:20:44.423985 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:20:44.423992 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Aug 13 00:20:44.423999 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:20:44.424006 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:20:44.424013 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:20:44.424020 kernel: psci: MIGRATE_INFO_TYPE not supported.
Aug 13 00:20:44.424027 kernel: psci: SMC Calling Convention v1.4
Aug 13 00:20:44.424033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Aug 13 00:20:44.424040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Aug 13 00:20:44.424049 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:20:44.424055 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:20:44.424062 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:20:44.424069 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:20:44.424076 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:20:44.424083 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:20:44.424090 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:20:44.424097 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:20:44.424105 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:20:44.424112 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:20:44.424119 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Aug 13 00:20:44.424128 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:20:44.424135 kernel: alternatives: applying boot alternatives
Aug 13 00:20:44.424144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:20:44.424151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:20:44.424159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:20:44.424166 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:20:44.424173 kernel: Fallback order for Node 0: 0
Aug 13 00:20:44.424180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Aug 13 00:20:44.424187 kernel: Policy zone: Normal
Aug 13 00:20:44.424194 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:20:44.424201 kernel: software IO TLB: area num 2.
Aug 13 00:20:44.424210 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Aug 13 00:20:44.424217 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved)
Aug 13 00:20:44.424224 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:20:44.424231 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:20:44.424239 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:20:44.424246 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:20:44.424253 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:20:44.424260 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:20:44.424267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:20:44.424274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:20:44.424281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:20:44.424289 kernel: GICv3: 960 SPIs implemented
Aug 13 00:20:44.424296 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:20:44.424303 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:20:44.424310 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:20:44.424318 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Aug 13 00:20:44.424325 kernel: ITS: No ITS available, not enabling LPIs
Aug 13 00:20:44.424332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:20:44.424339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:20:44.424346 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:20:44.424353 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:20:44.424361 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:20:44.424369 kernel: Console: colour dummy device 80x25
Aug 13 00:20:44.424377 kernel: printk: console [tty1] enabled
Aug 13 00:20:44.424384 kernel: ACPI: Core revision 20230628
Aug 13 00:20:44.424391 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:20:44.424399 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:20:44.424406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:20:44.424413 kernel: landlock: Up and running.
Aug 13 00:20:44.424421 kernel: SELinux: Initializing.
Aug 13 00:20:44.424428 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.424435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.424444 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:20:44.424451 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:20:44.424458 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Aug 13 00:20:44.424465 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Aug 13 00:20:44.424472 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 00:20:44.424480 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:20:44.424487 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:20:44.424502 kernel: Remapping and enabling EFI services.
Aug 13 00:20:44.424510 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:20:44.424517 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:20:44.424525 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Aug 13 00:20:44.424557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:20:44.424565 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:20:44.424574 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:20:44.424582 kernel: SMP: Total of 2 processors activated.
Aug 13 00:20:44.424592 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:20:44.424605 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Aug 13 00:20:44.424612 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:20:44.424620 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:20:44.424627 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:20:44.424635 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:20:44.424643 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:20:44.424650 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:20:44.424657 kernel: alternatives: applying system-wide alternatives
Aug 13 00:20:44.424665 kernel: devtmpfs: initialized
Aug 13 00:20:44.424674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:20:44.424682 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:20:44.424689 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:20:44.424696 kernel: SMBIOS 3.1.0 present.
Aug 13 00:20:44.424704 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Aug 13 00:20:44.424712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:20:44.424719 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:20:44.424727 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:20:44.424735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:20:44.424744 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:20:44.424752 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Aug 13 00:20:44.424759 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:20:44.424767 kernel: cpuidle: using governor menu
Aug 13 00:20:44.424775 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:20:44.424788 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:20:44.424796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:20:44.424803 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:20:44.424811 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:20:44.424820 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:20:44.424827 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:20:44.424953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:20:44.424961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:20:44.424968 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:20:44.424976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:20:44.424983 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:20:44.424991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:20:44.424998 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:20:44.425008 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:20:44.425015 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:20:44.425023 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:20:44.425030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:20:44.425038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:20:44.425045 kernel: ACPI: Interpreter enabled
Aug 13 00:20:44.425052 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:20:44.425060 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:20:44.425067 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:20:44.425076 kernel: printk: bootconsole [pl11] disabled
Aug 13 00:20:44.425084 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Aug 13 00:20:44.425091 kernel: iommu: Default domain type: Translated
Aug 13 00:20:44.425099 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:20:44.425106 kernel: efivars: Registered efivars operations
Aug 13 00:20:44.425113 kernel: vgaarb: loaded
Aug 13 00:20:44.425121 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:20:44.425128 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:20:44.425136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:20:44.425145 kernel: pnp: PnP ACPI init
Aug 13 00:20:44.425152 kernel: pnp: PnP ACPI: found 0 devices
Aug 13 00:20:44.425160 kernel: NET: Registered PF_INET protocol family
Aug 13 00:20:44.425168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:20:44.425175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:20:44.425183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:20:44.425190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:20:44.425197 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:20:44.425205 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:20:44.425214 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.425221 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.425229 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:20:44.425236 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:20:44.425243 kernel: kvm [1]: HYP mode not available
Aug 13 00:20:44.425251 kernel: Initialise system trusted keyrings
Aug 13 00:20:44.425258 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:20:44.425265 kernel: Key type asymmetric registered
Aug 13 00:20:44.425273 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:20:44.425281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:20:44.425289 kernel: io scheduler mq-deadline registered
Aug 13 00:20:44.425296 kernel: io scheduler kyber registered
Aug 13 00:20:44.425303 kernel: io scheduler bfq registered
Aug 13 00:20:44.425311 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:20:44.425318 kernel: thunder_xcv, ver 1.0
Aug 13 00:20:44.425325 kernel: thunder_bgx, ver 1.0
Aug 13 00:20:44.425333 kernel: nicpf, ver 1.0
Aug 13 00:20:44.425340 kernel: nicvf, ver 1.0
Aug 13 00:20:44.425502 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:20:44.425580 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:20:43 UTC (1755044443)
Aug 13 00:20:44.425590 kernel: efifb: probing for efifb
Aug 13 00:20:44.425598 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:20:44.425605 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:20:44.425612 kernel: efifb: scrolling: redraw
Aug 13 00:20:44.425620 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:20:44.425628 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:20:44.425637 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:20:44.425645 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Aug 13 00:20:44.425652 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:20:44.425660 kernel: No ACPI PMU IRQ for CPU0
Aug 13 00:20:44.425667 kernel: No ACPI PMU IRQ for CPU1
Aug 13 00:20:44.425675 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Aug 13 00:20:44.425682 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:20:44.425690 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:20:44.425697 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:20:44.425706 kernel: Segment Routing with IPv6
Aug 13 00:20:44.425713 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:20:44.425721 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:20:44.425728 kernel: Key type dns_resolver registered
Aug 13 00:20:44.425735 kernel: registered taskstats version 1
Aug 13 00:20:44.425743 kernel: Loading compiled-in X.509 certificates
Aug 13 00:20:44.425750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:20:44.425758 kernel: Key type .fscrypt registered
Aug 13 00:20:44.425765 kernel: Key type fscrypt-provisioning registered
Aug 13 00:20:44.425774 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:20:44.425782 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:20:44.425789 kernel: ima: No architecture policies found
Aug 13 00:20:44.425796 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:20:44.425804 kernel: clk: Disabling unused clocks
Aug 13 00:20:44.425811 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:20:44.425819 kernel: Run /init as init process
Aug 13 00:20:44.425827 kernel:   with arguments:
Aug 13 00:20:44.426396 kernel:     /init
Aug 13 00:20:44.426414 kernel:   with environment:
Aug 13 00:20:44.426421 kernel:     HOME=/
Aug 13 00:20:44.426429 kernel:     TERM=linux
Aug 13 00:20:44.426436 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:20:44.426447 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:20:44.426457 systemd[1]: Detected virtualization microsoft.
Aug 13 00:20:44.426465 systemd[1]: Detected architecture arm64.
Aug 13 00:20:44.426473 systemd[1]: Running in initrd.
Aug 13 00:20:44.426482 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:20:44.426490 systemd[1]: Hostname set to .
Aug 13 00:20:44.426498 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:20:44.426506 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:20:44.426514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:20:44.426522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:20:44.426531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:20:44.426539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:20:44.426549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:20:44.426557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:20:44.426566 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:20:44.426574 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:20:44.426582 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:20:44.426591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:20:44.426600 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:20:44.426608 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:20:44.426616 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:20:44.426625 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:20:44.426633 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:20:44.426641 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:20:44.426649 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:20:44.426657 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:20:44.426665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:20:44.426675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:20:44.426682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:20:44.426690 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:20:44.426698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:20:44.426706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:20:44.426714 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:20:44.426722 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:20:44.426730 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:20:44.426738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:20:44.426794 systemd-journald[217]: Collecting audit messages is disabled.
Aug 13 00:20:44.426816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:44.426826 systemd-journald[217]: Journal started
Aug 13 00:20:44.426858 systemd-journald[217]: Runtime Journal (/run/log/journal/535032f3cf3c4b22a1d3c3a408843484) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:20:44.424906 systemd-modules-load[218]: Inserted module 'overlay'
Aug 13 00:20:44.459332 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:20:44.459386 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:20:44.465848 kernel: Bridge firewalling registered
Aug 13 00:20:44.469688 systemd-modules-load[218]: Inserted module 'br_netfilter'
Aug 13 00:20:44.470733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:20:44.490676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:20:44.500380 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:20:44.513660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:20:44.525994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:44.551379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:44.565017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:20:44.581756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:20:44.614580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:20:44.633378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:20:44.652468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:44.663992 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:20:44.682531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:20:44.712179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:20:44.731002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:20:44.748656 dracut-cmdline[251]: dracut-dracut-053
Aug 13 00:20:44.748656 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:20:44.752135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:20:44.767018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:20:44.844019 systemd-resolved[259]: Positive Trust Anchors:
Aug 13 00:20:44.844038 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:20:44.844070 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:20:44.847293 systemd-resolved[259]: Defaulting to hostname 'linux'.
Aug 13 00:20:44.848284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:20:44.859672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:44.981859 kernel: SCSI subsystem initialized
Aug 13 00:20:44.989863 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:20:45.000865 kernel: iscsi: registered transport (tcp)
Aug 13 00:20:45.019965 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:20:45.020049 kernel: QLogic iSCSI HBA Driver
Aug 13 00:20:45.057559 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:20:45.083175 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:20:45.117846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:20:45.117904 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:20:45.117916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:20:45.174855 kernel: raid6: neonx8   gen() 15770 MB/s
Aug 13 00:20:45.195844 kernel: raid6: neonx4   gen() 15673 MB/s
Aug 13 00:20:45.216843 kernel: raid6: neonx2   gen() 13274 MB/s
Aug 13 00:20:45.238844 kernel: raid6: neonx1   gen() 10480 MB/s
Aug 13 00:20:45.259847 kernel: raid6: int64x8  gen()  6938 MB/s
Aug 13 00:20:45.280842 kernel: raid6: int64x4  gen()  7333 MB/s
Aug 13 00:20:45.302843 kernel: raid6: int64x2  gen()  6130 MB/s
Aug 13 00:20:45.328103 kernel: raid6: int64x1  gen()  5053 MB/s
Aug 13 00:20:45.328115 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Aug 13 00:20:45.353104 kernel: raid6: .... xor() 11947 MB/s, rmw enabled
Aug 13 00:20:45.353126 kernel: raid6: using neon recovery algorithm
Aug 13 00:20:45.366948 kernel: xor: measuring software checksum speed
Aug 13 00:20:45.366966 kernel:    8regs           : 19750 MB/sec
Aug 13 00:20:45.371289 kernel:    32regs          : 19589 MB/sec
Aug 13 00:20:45.375236 kernel:    arm64_neon      : 27079 MB/sec
Aug 13 00:20:45.380104 kernel: xor: using function: arm64_neon (27079 MB/sec)
Aug 13 00:20:45.431848 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:20:45.442342 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:20:45.459044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:20:45.483326 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Aug 13 00:20:45.489298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:20:45.516973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:20:45.537347 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Aug 13 00:20:45.569425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:45.586102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:20:45.634471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:45.658011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:20:45.689467 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:45.702522 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:45.718296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:45.732691 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:20:45.749859 kernel: hv_vmbus: Vmbus version:5.3
Aug 13 00:20:45.759884 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:20:45.776359 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:20:45.776384 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Aug 13 00:20:45.791080 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Aug 13 00:20:45.796352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:20:45.847608 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:20:45.847634 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:20:45.847644 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Aug 13 00:20:45.847666 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:20:45.847677 kernel: hv_vmbus: registering driver hv_storvsc
Aug 13 00:20:45.847708 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:20:45.847718 kernel: scsi host1: storvsc_host_t
Aug 13 00:20:45.828486 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:20:45.868694 kernel: scsi host0: storvsc_host_t
Aug 13 00:20:45.869032 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Aug 13 00:20:45.828667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:45.886270 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:45.893854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:45.894079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.958949 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Aug 13 00:20:45.959165 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: VF slot 1 added
Aug 13 00:20:45.959263 kernel: PTP clock support registered
Aug 13 00:20:45.959277 kernel: hv_vmbus: registering driver hv_pci
Aug 13 00:20:45.909731 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:45.967096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:46.216848 kernel: hv_pci 5cac57c0-bfec-4ff9-bbb0-071d8e4e73f3: PCI VMBus probing: Using version 0x10004
Aug 13 00:20:46.217023 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:20:46.217035 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:20:46.217044 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:20:46.217053 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:20:46.217063 kernel: hv_pci 5cac57c0-bfec-4ff9-bbb0-071d8e4e73f3: PCI host bridge to bus bfec:00
Aug 13 00:20:46.217157 kernel: pci_bus bfec:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Aug 13 00:20:46.217272 kernel: pci_bus bfec:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 00:20:46.217352 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:20:46.217362 kernel: pci bfec:00:02.0: [15b3:1018] type 00 class 0x020000
Aug 13 00:20:46.217385 kernel: pci bfec:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:46.217399 kernel: pci bfec:00:02.0: enabling Extended Tags
Aug 13 00:20:46.212583 systemd-resolved[259]: Clock change detected. Flushing caches.
Aug 13 00:20:46.288345 kernel: pci bfec:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bfec:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Aug 13 00:20:46.290062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:46.321712 kernel: pci_bus bfec:00: busn_res: [bus 00-ff] end is updated to 00
Aug 13 00:20:46.321883 kernel: pci bfec:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:46.290202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:46.344155 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Aug 13 00:20:46.344395 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:20:46.345354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:46.383181 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Aug 13 00:20:46.383407 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Aug 13 00:20:46.383526 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 00:20:46.383653 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:20:46.394926 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Aug 13 00:20:46.395188 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Aug 13 00:20:46.397180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:46.434894 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:46.434936 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:20:46.435100 kernel: mlx5_core bfec:00:02.0: enabling device (0000 -> 0002)
Aug 13 00:20:46.424508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:46.454576 kernel: mlx5_core bfec:00:02.0: firmware version: 16.30.1284
Aug 13 00:20:46.478682 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:46.693848 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: VF registering: eth1
Aug 13 00:20:46.694089 kernel: mlx5_core bfec:00:02.0 eth1: joined to eth0
Aug 13 00:20:46.704238 kernel: mlx5_core bfec:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Aug 13 00:20:46.717157 kernel: mlx5_core bfec:00:02.0 enP49132s1: renamed from eth1
Aug 13 00:20:46.877472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Aug 13 00:20:46.981947 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486)
Aug 13 00:20:46.999438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:20:47.025367 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (482)
Aug 13 00:20:47.024993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Aug 13 00:20:47.044548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Aug 13 00:20:47.052272 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 00:20:47.086414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:20:47.114151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.123153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.135172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:48.136196 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:48.136259 disk-uuid[603]: The operation has completed successfully.
Aug 13 00:20:48.209563 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:20:48.209664 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:20:48.245329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:20:48.259362 sh[716]: Success
Aug 13 00:20:48.287374 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:20:48.635669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:20:48.646289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:20:48.654963 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:20:48.703892 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:20:48.703948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:48.711526 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:20:48.717104 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:20:48.721785 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:20:49.109417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:20:49.115530 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:20:49.136428 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:20:49.148360 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:20:49.186300 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:49.186367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:49.191565 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:49.251163 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:49.260644 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:20:49.273233 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:49.274951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:20:49.296350 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:20:49.308481 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:20:49.327758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:20:49.342267 systemd-networkd[898]: lo: Link UP
Aug 13 00:20:49.342277 systemd-networkd[898]: lo: Gained carrier
Aug 13 00:20:49.343848 systemd-networkd[898]: Enumeration completed
Aug 13 00:20:49.344516 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:49.344519 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:20:49.345737 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:20:49.352209 systemd[1]: Reached target network.target - Network.
Aug 13 00:20:49.442188 kernel: mlx5_core bfec:00:02.0 enP49132s1: Link up
Aug 13 00:20:49.489957 systemd-networkd[898]: enP49132s1: Link UP
Aug 13 00:20:49.494725 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: Data path switched to VF: enP49132s1
Aug 13 00:20:49.490062 systemd-networkd[898]: eth0: Link UP
Aug 13 00:20:49.490197 systemd-networkd[898]: eth0: Gained carrier
Aug 13 00:20:49.490207 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:49.500347 systemd-networkd[898]: enP49132s1: Gained carrier
Aug 13 00:20:49.529174 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:20:50.405247 ignition[900]: Ignition 2.19.0
Aug 13 00:20:50.405259 ignition[900]: Stage: fetch-offline
Aug 13 00:20:50.407736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:20:50.405303 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.424285 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:20:50.405312 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.405418 ignition[900]: parsed url from cmdline: ""
Aug 13 00:20:50.405421 ignition[900]: no config URL provided
Aug 13 00:20:50.405426 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.405433 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.405439 ignition[900]: failed to fetch config: resource requires networking
Aug 13 00:20:50.405656 ignition[900]: Ignition finished successfully
Aug 13 00:20:50.443790 ignition[908]: Ignition 2.19.0
Aug 13 00:20:50.443797 ignition[908]: Stage: fetch
Aug 13 00:20:50.444054 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.444065 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.444206 ignition[908]: parsed url from cmdline: ""
Aug 13 00:20:50.444211 ignition[908]: no config URL provided
Aug 13 00:20:50.444217 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.444228 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.444251 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Aug 13 00:20:50.554402 ignition[908]: GET result: OK
Aug 13 00:20:50.554488 ignition[908]: config has been read from IMDS userdata
Aug 13 00:20:50.554533 ignition[908]: parsing config with SHA512: 4654ed86e5940310633b00b104551a618102d0ab1127d4ffb43d316502957902e053b29dda36bb3db3254744e4112fe3112bd4f93887daee104bb617112482b0
Aug 13 00:20:50.559668 unknown[908]: fetched base config from "system"
Aug 13 00:20:50.560149 ignition[908]: fetch: fetch complete
Aug 13 00:20:50.559675 unknown[908]: fetched base config from "system"
Aug 13 00:20:50.560155 ignition[908]: fetch: fetch passed
Aug 13 00:20:50.559680 unknown[908]: fetched user config from "azure"
Aug 13 00:20:50.560204 ignition[908]: Ignition finished successfully
Aug 13 00:20:50.564392 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:20:50.584320 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:20:50.603238 ignition[915]: Ignition 2.19.0
Aug 13 00:20:50.606552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:20:50.603245 ignition[915]: Stage: kargs
Aug 13 00:20:50.603431 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.603439 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.631437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:20:50.604375 ignition[915]: kargs: kargs passed
Aug 13 00:20:50.604423 ignition[915]: Ignition finished successfully
Aug 13 00:20:50.658759 ignition[922]: Ignition 2.19.0
Aug 13 00:20:50.664739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:20:50.658766 ignition[922]: Stage: disks
Aug 13 00:20:50.673374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:20:50.658981 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.685742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:20:50.658990 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.697603 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:20:50.660061 ignition[922]: disks: disks passed
Aug 13 00:20:50.710407 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:20:50.660116 ignition[922]: Ignition finished successfully
Aug 13 00:20:50.721406 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:20:50.754437 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:20:50.826791 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Aug 13 00:20:50.834883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:20:50.857238 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:20:50.929156 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:20:50.929527 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:20:50.935352 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:20:50.980218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:51.007160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Aug 13 00:20:51.005201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:20:51.013360 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:20:51.072434 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:51.072465 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:51.072476 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:51.072486 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:51.049316 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:20:51.049354 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:20:51.065229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:51.078759 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:20:51.118442 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:20:51.145268 systemd-networkd[898]: eth0: Gained IPv6LL
Aug 13 00:20:51.606776 coreos-metadata[944]: Aug 13 00:20:51.606 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:20:51.617419 coreos-metadata[944]: Aug 13 00:20:51.613 INFO Fetch successful
Aug 13 00:20:51.617419 coreos-metadata[944]: Aug 13 00:20:51.613 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:20:51.638279 coreos-metadata[944]: Aug 13 00:20:51.626 INFO Fetch successful
Aug 13 00:20:51.645356 coreos-metadata[944]: Aug 13 00:20:51.640 INFO wrote hostname ci-4081.3.5-a-2fbd311b45 to /sysroot/etc/hostname
Aug 13 00:20:51.645771 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:20:51.852400 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:20:51.889526 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:20:51.915950 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:20:51.933912 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:20:52.984628 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:20:53.003369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:20:53.017213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:20:53.041199 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:20:53.048151 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:53.068170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:20:53.084590 ignition[1060]: INFO : Ignition 2.19.0
Aug 13 00:20:53.084590 ignition[1060]: INFO : Stage: mount
Aug 13 00:20:53.094337 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:53.094337 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:53.094337 ignition[1060]: INFO : mount: mount passed
Aug 13 00:20:53.094337 ignition[1060]: INFO : Ignition finished successfully
Aug 13 00:20:53.095211 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:20:53.126362 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:20:53.157321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:53.183151 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Aug 13 00:20:53.199280 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:53.199321 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:53.204473 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:53.213147 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:53.215209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:53.245749 ignition[1089]: INFO : Ignition 2.19.0
Aug 13 00:20:53.250198 ignition[1089]: INFO : Stage: files
Aug 13 00:20:53.250198 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:53.250198 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:53.250198 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:20:53.279792 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:20:53.279792 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:20:53.395499 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:20:53.404417 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:20:53.404417 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:20:53.395976 unknown[1089]: wrote ssh authorized keys file for user: core
Aug 13 00:20:53.441013 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:20:53.453376 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 13 00:20:53.501013 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:20:53.701408 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 13 00:20:54.117643 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:20:54.381835 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:54.381835 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:54.493890 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:54.493890 ignition[1089]: INFO : files: files passed
Aug 13 00:20:54.493890 ignition[1089]: INFO : Ignition finished successfully
Aug 13 00:20:54.429455 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:20:54.494462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:20:54.517375 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:20:54.535622 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:20:54.593641 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.593641 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.535718 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:20:54.625232 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.555078 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:20:54.571360 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:20:54.615420 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:20:54.676211 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:20:54.678194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:20:54.693404 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:20:54.705702 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:20:54.720282 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:20:54.740306 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:20:54.768899 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:20:54.787405 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:20:54.810628 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:20:54.812172 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:20:54.826664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:54.842703 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:54.856973 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:20:54.870434 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:20:54.870511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:20:54.891485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:20:54.898044 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:20:54.911383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:20:54.925431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:20:54.941479 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:20:54.956248 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:20:54.969693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:54.986342 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:20:54.999452 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:20:55.014074 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:20:55.026249 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:20:55.026329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:20:55.046975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:20:55.054017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:20:55.069772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:20:55.069831 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:20:55.084867 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:20:55.084944 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:55.107872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:20:55.107941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:20:55.122524 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:20:55.122578 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:20:55.137497 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:20:55.210435 ignition[1143]: INFO : Ignition 2.19.0
Aug 13 00:20:55.210435 ignition[1143]: INFO : Stage: umount
Aug 13 00:20:55.210435 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:55.210435 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:55.210435 ignition[1143]: INFO : umount: umount passed
Aug 13 00:20:55.210435 ignition[1143]: INFO : Ignition finished successfully
Aug 13 00:20:55.137548 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:20:55.178424 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:20:55.209118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:20:55.217272 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:20:55.217363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:55.231304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:20:55.231376 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:55.261702 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:20:55.261825 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:20:55.273564 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:20:55.273639 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:20:55.299533 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:20:55.299604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:20:55.317027 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:20:55.317086 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:20:55.332866 systemd[1]: Stopped target network.target - Network.
Aug 13 00:20:55.345785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:20:55.345869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:20:55.362125 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:20:55.384253 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:20:55.388157 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:20:55.401238 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:20:55.414363 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:20:55.434575 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:20:55.434663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:20:55.450374 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:20:55.450459 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:20:55.465566 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:20:55.465645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:20:55.479857 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:20:55.479939 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:20:55.495603 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:20:55.508533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:20:55.525113 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:20:55.532935 systemd-networkd[898]: eth0: DHCPv6 lease lost Aug 13 00:20:55.534952 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:20:55.535482 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:20:55.551284 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:20:55.551508 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:20:55.565724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:20:55.565801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:55.605364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:20:55.618215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:20:55.618301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:20:55.636038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:20:55.863458 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: Data path switched from VF: enP49132s1 Aug 13 00:20:55.636099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:55.656219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:20:55.656289 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:55.669990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:20:55.670052 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 00:20:55.685900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:20:55.705041 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:20:55.705381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:20:55.756550 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:20:55.756696 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:20:55.775551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:20:55.775679 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:55.790213 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:20:55.790268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:55.803425 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:20:55.803484 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:20:55.824253 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:20:55.842055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:20:55.863546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:20:55.863613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:55.878585 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:20:55.878666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:55.916432 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:20:55.934092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:20:55.934194 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:20:55.957949 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:20:55.958013 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:55.970213 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:20:55.970267 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:55.985898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:20:55.985955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:55.998597 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:20:55.998715 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:20:56.009658 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:20:56.009746 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:20:56.022416 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:20:56.193674 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 13 00:20:56.053324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:20:56.069387 systemd[1]: Switching root. 
Aug 13 00:20:56.203534 systemd-journald[217]: Journal stopped Aug 13 00:20:44.420559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 13 00:20:44.420582 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025 Aug 13 00:20:44.420591 kernel: KASLR enabled Aug 13 00:20:44.420597 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Aug 13 00:20:44.420605 kernel: printk: bootconsole [pl11] enabled Aug 13 00:20:44.420611 kernel: efi: EFI v2.7 by EDK II Aug 13 00:20:44.420618 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Aug 13 00:20:44.420624 kernel: random: crng init done Aug 13 00:20:44.420630 kernel: ACPI: Early table checksum verification disabled Aug 13 00:20:44.420636 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Aug 13 00:20:44.420643 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420649 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420657 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Aug 13 00:20:44.420664 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420671 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420678 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420685 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420693 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420700 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420706 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Aug 13 00:20:44.420713 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:20:44.420719 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Aug 13 00:20:44.420726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Aug 13 00:20:44.420732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Aug 13 00:20:44.420738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Aug 13 00:20:44.420745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Aug 13 00:20:44.420751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Aug 13 00:20:44.420757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Aug 13 00:20:44.420765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Aug 13 00:20:44.420772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Aug 13 00:20:44.420778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Aug 13 00:20:44.420784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Aug 13 00:20:44.420791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Aug 13 00:20:44.420797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Aug 13 00:20:44.420803 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Aug 13 00:20:44.420810 kernel: Zone ranges: Aug 13 00:20:44.420816 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Aug 13 00:20:44.420822 kernel: DMA32 empty Aug 13 00:20:44.420829 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 00:20:44.423907 kernel: Movable zone start for each node Aug 13 00:20:44.423926 kernel: Early memory node ranges Aug 13 00:20:44.423934 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Aug 13 00:20:44.423941 kernel: node 0: [mem 
0x0000000000824000-0x000000003e54ffff] Aug 13 00:20:44.423947 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Aug 13 00:20:44.423954 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Aug 13 00:20:44.423963 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Aug 13 00:20:44.423969 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Aug 13 00:20:44.423977 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 00:20:44.423985 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 13 00:20:44.423992 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 13 00:20:44.423999 kernel: psci: probing for conduit method from ACPI. Aug 13 00:20:44.424006 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 00:20:44.424013 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:20:44.424020 kernel: psci: MIGRATE_INFO_TYPE not supported. Aug 13 00:20:44.424027 kernel: psci: SMC Calling Convention v1.4 Aug 13 00:20:44.424033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Aug 13 00:20:44.424040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Aug 13 00:20:44.424049 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 00:20:44.424055 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 00:20:44.424062 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:20:44.424069 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:20:44.424076 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:20:44.424083 kernel: CPU features: detected: Hardware dirty bit management Aug 13 00:20:44.424090 kernel: CPU features: detected: Spectre-BHB Aug 13 00:20:44.424097 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 00:20:44.424105 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 00:20:44.424112 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 00:20:44.424119 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Aug 13 00:20:44.424128 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 00:20:44.424135 kernel: alternatives: applying boot alternatives Aug 13 00:20:44.424144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:20:44.424151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:20:44.424159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:20:44.424166 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:20:44.424173 kernel: Fallback order for Node 0: 0 Aug 13 00:20:44.424180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Aug 13 00:20:44.424187 kernel: Policy zone: Normal Aug 13 00:20:44.424194 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:20:44.424201 kernel: software IO TLB: area num 2. Aug 13 00:20:44.424210 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Aug 13 00:20:44.424217 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved) Aug 13 00:20:44.424224 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:20:44.424231 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:20:44.424239 kernel: rcu: RCU event tracing is enabled. Aug 13 00:20:44.424246 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Aug 13 00:20:44.424253 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:20:44.424260 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:20:44.424267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:20:44.424274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:20:44.424281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:20:44.424289 kernel: GICv3: 960 SPIs implemented Aug 13 00:20:44.424296 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:20:44.424303 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:20:44.424310 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 13 00:20:44.424318 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 13 00:20:44.424325 kernel: ITS: No ITS available, not enabling LPIs Aug 13 00:20:44.424332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:20:44.424339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:20:44.424346 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 00:20:44.424353 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 00:20:44.424361 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 00:20:44.424369 kernel: Console: colour dummy device 80x25 Aug 13 00:20:44.424377 kernel: printk: console [tty1] enabled Aug 13 00:20:44.424384 kernel: ACPI: Core revision 20230628 Aug 13 00:20:44.424391 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 00:20:44.424399 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:20:44.424406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:20:44.424413 kernel: landlock: Up and running. Aug 13 00:20:44.424421 kernel: SELinux: Initializing. 
Aug 13 00:20:44.424428 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.424435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.424444 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:20:44.424451 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:20:44.424458 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 13 00:20:44.424465 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Aug 13 00:20:44.424472 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 13 00:20:44.424480 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:20:44.424487 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:20:44.424502 kernel: Remapping and enabling EFI services. Aug 13 00:20:44.424510 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:20:44.424517 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:20:44.424525 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 13 00:20:44.424557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:20:44.424565 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 00:20:44.424574 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:20:44.424582 kernel: SMP: Total of 2 processors activated. 
Aug 13 00:20:44.424592 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:20:44.424605 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 13 00:20:44.424612 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 00:20:44.424620 kernel: CPU features: detected: CRC32 instructions Aug 13 00:20:44.424627 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 00:20:44.424635 kernel: CPU features: detected: LSE atomic instructions Aug 13 00:20:44.424643 kernel: CPU features: detected: Privileged Access Never Aug 13 00:20:44.424650 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:20:44.424657 kernel: alternatives: applying system-wide alternatives Aug 13 00:20:44.424665 kernel: devtmpfs: initialized Aug 13 00:20:44.424674 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:20:44.424682 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:20:44.424689 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:20:44.424696 kernel: SMBIOS 3.1.0 present. 
Aug 13 00:20:44.424704 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Aug 13 00:20:44.424712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:20:44.424719 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:20:44.424727 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:20:44.424735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:20:44.424744 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:20:44.424752 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Aug 13 00:20:44.424759 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:20:44.424767 kernel: cpuidle: using governor menu Aug 13 00:20:44.424775 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 13 00:20:44.424788 kernel: ASID allocator initialised with 32768 entries Aug 13 00:20:44.424796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:20:44.424803 kernel: Serial: AMBA PL011 UART driver Aug 13 00:20:44.424811 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 13 00:20:44.424820 kernel: Modules: 0 pages in range for non-PLT usage Aug 13 00:20:44.424827 kernel: Modules: 509008 pages in range for PLT usage Aug 13 00:20:44.424953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:20:44.424961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:20:44.424968 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:20:44.424976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 00:20:44.424983 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:20:44.424991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:20:44.424998 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:20:44.425008 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 00:20:44.425015 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:20:44.425023 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:20:44.425030 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:20:44.425038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:20:44.425045 kernel: ACPI: Interpreter enabled Aug 13 00:20:44.425052 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:20:44.425060 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 13 00:20:44.425067 kernel: printk: console [ttyAMA0] enabled Aug 13 00:20:44.425076 kernel: printk: bootconsole [pl11] disabled Aug 13 00:20:44.425084 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 13 00:20:44.425091 kernel: iommu: Default domain type: Translated Aug 13 00:20:44.425099 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:20:44.425106 kernel: efivars: Registered efivars operations Aug 13 00:20:44.425113 kernel: vgaarb: loaded Aug 13 00:20:44.425121 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:20:44.425128 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:20:44.425136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:20:44.425145 kernel: pnp: PnP ACPI init Aug 13 00:20:44.425152 kernel: pnp: PnP ACPI: found 0 devices Aug 13 00:20:44.425160 kernel: NET: Registered PF_INET protocol family Aug 13 00:20:44.425168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:20:44.425175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:20:44.425183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:20:44.425190 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Aug 13 00:20:44.425197 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:20:44.425205 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:20:44.425214 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.425221 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.425229 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:20:44.425236 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:20:44.425243 kernel: kvm [1]: HYP mode not available Aug 13 00:20:44.425251 kernel: Initialise system trusted keyrings Aug 13 00:20:44.425258 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:20:44.425265 kernel: Key type asymmetric registered Aug 13 00:20:44.425273 kernel: Asymmetric key parser 'x509' registered Aug 13 00:20:44.425281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:20:44.425289 kernel: io scheduler mq-deadline registered Aug 13 00:20:44.425296 kernel: io scheduler kyber registered Aug 13 00:20:44.425303 kernel: io scheduler bfq registered Aug 13 00:20:44.425311 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:20:44.425318 kernel: thunder_xcv, ver 1.0 Aug 13 00:20:44.425325 kernel: thunder_bgx, ver 1.0 Aug 13 00:20:44.425333 kernel: nicpf, ver 1.0 Aug 13 00:20:44.425340 kernel: nicvf, ver 1.0 Aug 13 00:20:44.425502 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:20:44.425580 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:20:43 UTC (1755044443) Aug 13 00:20:44.425590 kernel: efifb: probing for efifb Aug 13 00:20:44.425598 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 00:20:44.425605 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 00:20:44.425612 kernel: efifb: scrolling: redraw Aug 13 00:20:44.425620 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Aug 13 00:20:44.425628 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:20:44.425637 kernel: fb0: EFI VGA frame buffer device Aug 13 00:20:44.425645 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Aug 13 00:20:44.425652 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:20:44.425660 kernel: No ACPI PMU IRQ for CPU0 Aug 13 00:20:44.425667 kernel: No ACPI PMU IRQ for CPU1 Aug 13 00:20:44.425675 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 13 00:20:44.425682 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:20:44.425690 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:20:44.425697 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:20:44.425706 kernel: Segment Routing with IPv6 Aug 13 00:20:44.425713 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:20:44.425721 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:20:44.425728 kernel: Key type dns_resolver registered Aug 13 00:20:44.425735 kernel: registered taskstats version 1 Aug 13 00:20:44.425743 kernel: Loading compiled-in X.509 certificates Aug 13 00:20:44.425750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:20:44.425758 kernel: Key type .fscrypt registered Aug 13 00:20:44.425765 kernel: Key type fscrypt-provisioning registered Aug 13 00:20:44.425774 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:20:44.425782 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:20:44.425789 kernel: ima: No architecture policies found Aug 13 00:20:44.425796 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:20:44.425804 kernel: clk: Disabling unused clocks Aug 13 00:20:44.425811 kernel: Freeing unused kernel memory: 39424K Aug 13 00:20:44.425819 kernel: Run /init as init process Aug 13 00:20:44.425827 kernel: with arguments: Aug 13 00:20:44.426396 kernel: /init Aug 13 00:20:44.426414 kernel: with environment: Aug 13 00:20:44.426421 kernel: HOME=/ Aug 13 00:20:44.426429 kernel: TERM=linux Aug 13 00:20:44.426436 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:20:44.426447 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:20:44.426457 systemd[1]: Detected virtualization microsoft. Aug 13 00:20:44.426465 systemd[1]: Detected architecture arm64. Aug 13 00:20:44.426473 systemd[1]: Running in initrd. Aug 13 00:20:44.426482 systemd[1]: No hostname configured, using default hostname. Aug 13 00:20:44.426490 systemd[1]: Hostname set to . Aug 13 00:20:44.426498 systemd[1]: Initializing machine ID from random generator. Aug 13 00:20:44.426506 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:20:44.426514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:20:44.426522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:20:44.426531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 00:20:44.426539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:20:44.426549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:20:44.426557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:20:44.426566 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:20:44.426574 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:20:44.426582 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:44.426591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:44.426600 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:20:44.426608 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:20:44.426616 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:20:44.426625 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:20:44.426633 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:20:44.426641 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:20:44.426649 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:20:44.426657 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:20:44.426665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:44.426675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:44.426682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:44.426690 systemd[1]: Reached target sockets.target - Socket Units. 
Aug 13 00:20:44.426698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:20:44.426706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:20:44.426714 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:20:44.426722 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:20:44.426730 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:20:44.426738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:20:44.426794 systemd-journald[217]: Collecting audit messages is disabled. Aug 13 00:20:44.426816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:44.426826 systemd-journald[217]: Journal started Aug 13 00:20:44.426858 systemd-journald[217]: Runtime Journal (/run/log/journal/535032f3cf3c4b22a1d3c3a408843484) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:20:44.424906 systemd-modules-load[218]: Inserted module 'overlay' Aug 13 00:20:44.459332 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:20:44.459386 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:20:44.465848 kernel: Bridge firewalling registered Aug 13 00:20:44.469688 systemd-modules-load[218]: Inserted module 'br_netfilter' Aug 13 00:20:44.470733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:20:44.490676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:44.500380 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:20:44.513660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:44.525994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:20:44.551379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:20:44.565017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:20:44.581756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:20:44.614580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:20:44.633378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:44.652468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:44.663992 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:44.682531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:20:44.712179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:20:44.731002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:20:44.748656 dracut-cmdline[251]: dracut-dracut-053 Aug 13 00:20:44.748656 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:20:44.752135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:20:44.767018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:20:44.844019 systemd-resolved[259]: Positive Trust Anchors:
Aug 13 00:20:44.844038 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:20:44.844070 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:20:44.847293 systemd-resolved[259]: Defaulting to hostname 'linux'.
Aug 13 00:20:44.848284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:20:44.859672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:44.981859 kernel: SCSI subsystem initialized
Aug 13 00:20:44.989863 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:20:45.000865 kernel: iscsi: registered transport (tcp)
Aug 13 00:20:45.019965 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:20:45.020049 kernel: QLogic iSCSI HBA Driver
Aug 13 00:20:45.057559 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:20:45.083175 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:20:45.117846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:20:45.117904 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:20:45.117916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:20:45.174855 kernel: raid6: neonx8 gen() 15770 MB/s
Aug 13 00:20:45.195844 kernel: raid6: neonx4 gen() 15673 MB/s
Aug 13 00:20:45.216843 kernel: raid6: neonx2 gen() 13274 MB/s
Aug 13 00:20:45.238844 kernel: raid6: neonx1 gen() 10480 MB/s
Aug 13 00:20:45.259847 kernel: raid6: int64x8 gen() 6938 MB/s
Aug 13 00:20:45.280842 kernel: raid6: int64x4 gen() 7333 MB/s
Aug 13 00:20:45.302843 kernel: raid6: int64x2 gen() 6130 MB/s
Aug 13 00:20:45.328103 kernel: raid6: int64x1 gen() 5053 MB/s
Aug 13 00:20:45.328115 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
Aug 13 00:20:45.353104 kernel: raid6: .... xor() 11947 MB/s, rmw enabled
Aug 13 00:20:45.353126 kernel: raid6: using neon recovery algorithm
Aug 13 00:20:45.366948 kernel: xor: measuring software checksum speed
Aug 13 00:20:45.366966 kernel: 8regs : 19750 MB/sec
Aug 13 00:20:45.371289 kernel: 32regs : 19589 MB/sec
Aug 13 00:20:45.375236 kernel: arm64_neon : 27079 MB/sec
Aug 13 00:20:45.380104 kernel: xor: using function: arm64_neon (27079 MB/sec)
Aug 13 00:20:45.431848 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:20:45.442342 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:20:45.459044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:20:45.483326 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Aug 13 00:20:45.489298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:20:45.516973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:20:45.537347 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Aug 13 00:20:45.569425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:45.586102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:20:45.634471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:45.658011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:20:45.689467 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:45.702522 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:45.718296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:45.732691 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:20:45.749859 kernel: hv_vmbus: Vmbus version:5.3
Aug 13 00:20:45.759884 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:20:45.776359 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:20:45.776384 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Aug 13 00:20:45.791080 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Aug 13 00:20:45.796352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:20:45.847608 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:20:45.847634 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:20:45.847644 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Aug 13 00:20:45.847666 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:20:45.847677 kernel: hv_vmbus: registering driver hv_storvsc
Aug 13 00:20:45.847708 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:20:45.847718 kernel: scsi host1: storvsc_host_t
Aug 13 00:20:45.828486 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:20:45.868694 kernel: scsi host0: storvsc_host_t
Aug 13 00:20:45.869032 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Aug 13 00:20:45.828667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:45.886270 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:45.893854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:45.894079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.958949 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Aug 13 00:20:45.959165 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: VF slot 1 added
Aug 13 00:20:45.959263 kernel: PTP clock support registered
Aug 13 00:20:45.959277 kernel: hv_vmbus: registering driver hv_pci
Aug 13 00:20:45.909731 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:45.967096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:46.216848 kernel: hv_pci 5cac57c0-bfec-4ff9-bbb0-071d8e4e73f3: PCI VMBus probing: Using version 0x10004
Aug 13 00:20:46.217023 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:20:46.217035 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:20:46.217044 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:20:46.217053 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:20:46.217063 kernel: hv_pci 5cac57c0-bfec-4ff9-bbb0-071d8e4e73f3: PCI host bridge to bus bfec:00
Aug 13 00:20:46.217157 kernel: pci_bus bfec:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Aug 13 00:20:46.217272 kernel: pci_bus bfec:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 00:20:46.217352 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:20:46.217362 kernel: pci bfec:00:02.0: [15b3:1018] type 00 class 0x020000
Aug 13 00:20:46.217385 kernel: pci bfec:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:46.217399 kernel: pci bfec:00:02.0: enabling Extended Tags
Aug 13 00:20:46.212583 systemd-resolved[259]: Clock change detected. Flushing caches.
Aug 13 00:20:46.288345 kernel: pci bfec:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bfec:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Aug 13 00:20:46.290062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:46.321712 kernel: pci_bus bfec:00: busn_res: [bus 00-ff] end is updated to 00
Aug 13 00:20:46.321883 kernel: pci bfec:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:46.290202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:46.344155 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Aug 13 00:20:46.344395 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:20:46.345354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:46.383181 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Aug 13 00:20:46.383407 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Aug 13 00:20:46.383526 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 00:20:46.383653 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:20:46.394926 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Aug 13 00:20:46.395188 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Aug 13 00:20:46.397180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:46.434894 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:46.434936 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:20:46.435100 kernel: mlx5_core bfec:00:02.0: enabling device (0000 -> 0002)
Aug 13 00:20:46.424508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:46.454576 kernel: mlx5_core bfec:00:02.0: firmware version: 16.30.1284
Aug 13 00:20:46.478682 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:46.693848 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: VF registering: eth1
Aug 13 00:20:46.694089 kernel: mlx5_core bfec:00:02.0 eth1: joined to eth0
Aug 13 00:20:46.704238 kernel: mlx5_core bfec:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Aug 13 00:20:46.717157 kernel: mlx5_core bfec:00:02.0 enP49132s1: renamed from eth1
Aug 13 00:20:46.877472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Aug 13 00:20:46.981947 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486)
Aug 13 00:20:46.999438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:20:47.025367 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (482)
Aug 13 00:20:47.024993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Aug 13 00:20:47.044548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Aug 13 00:20:47.052272 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 00:20:47.086414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:20:47.114151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.123153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.135172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:48.136196 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:48.136259 disk-uuid[603]: The operation has completed successfully.
Aug 13 00:20:48.209563 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:20:48.209664 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:20:48.245329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:20:48.259362 sh[716]: Success
Aug 13 00:20:48.287374 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:20:48.635669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:20:48.646289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:20:48.654963 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:20:48.703892 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:20:48.703948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:48.711526 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:20:48.717104 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:20:48.721785 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:20:49.109417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:20:49.115530 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:20:49.136428 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:20:49.148360 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:20:49.186300 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:49.186367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:49.191565 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:49.251163 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:49.260644 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:20:49.273233 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:49.274951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:20:49.296350 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:20:49.308481 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:20:49.327758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:20:49.342267 systemd-networkd[898]: lo: Link UP
Aug 13 00:20:49.342277 systemd-networkd[898]: lo: Gained carrier
Aug 13 00:20:49.343848 systemd-networkd[898]: Enumeration completed
Aug 13 00:20:49.344516 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:49.344519 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:20:49.345737 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:20:49.352209 systemd[1]: Reached target network.target - Network.
Aug 13 00:20:49.442188 kernel: mlx5_core bfec:00:02.0 enP49132s1: Link up
Aug 13 00:20:49.489957 systemd-networkd[898]: enP49132s1: Link UP
Aug 13 00:20:49.494725 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: Data path switched to VF: enP49132s1
Aug 13 00:20:49.490062 systemd-networkd[898]: eth0: Link UP
Aug 13 00:20:49.490197 systemd-networkd[898]: eth0: Gained carrier
Aug 13 00:20:49.490207 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:49.500347 systemd-networkd[898]: enP49132s1: Gained carrier
Aug 13 00:20:49.529174 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:20:50.405247 ignition[900]: Ignition 2.19.0
Aug 13 00:20:50.405259 ignition[900]: Stage: fetch-offline
Aug 13 00:20:50.407736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:20:50.405303 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.424285 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:20:50.405312 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.405418 ignition[900]: parsed url from cmdline: ""
Aug 13 00:20:50.405421 ignition[900]: no config URL provided
Aug 13 00:20:50.405426 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.405433 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.405439 ignition[900]: failed to fetch config: resource requires networking
Aug 13 00:20:50.405656 ignition[900]: Ignition finished successfully
Aug 13 00:20:50.443790 ignition[908]: Ignition 2.19.0
Aug 13 00:20:50.443797 ignition[908]: Stage: fetch
Aug 13 00:20:50.444054 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.444065 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.444206 ignition[908]: parsed url from cmdline: ""
Aug 13 00:20:50.444211 ignition[908]: no config URL provided
Aug 13 00:20:50.444217 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.444228 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:50.444251 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Aug 13 00:20:50.554402 ignition[908]: GET result: OK
Aug 13 00:20:50.554488 ignition[908]: config has been read from IMDS userdata
Aug 13 00:20:50.554533 ignition[908]: parsing config with SHA512: 4654ed86e5940310633b00b104551a618102d0ab1127d4ffb43d316502957902e053b29dda36bb3db3254744e4112fe3112bd4f93887daee104bb617112482b0
Aug 13 00:20:50.559668 unknown[908]: fetched base config from "system"
Aug 13 00:20:50.560149 ignition[908]: fetch: fetch complete
Aug 13 00:20:50.559675 unknown[908]: fetched base config from "system"
Aug 13 00:20:50.560155 ignition[908]: fetch: fetch passed
Aug 13 00:20:50.559680 unknown[908]: fetched user config from "azure"
Aug 13 00:20:50.560204 ignition[908]: Ignition finished successfully
Aug 13 00:20:50.564392 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:20:50.584320 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:20:50.603238 ignition[915]: Ignition 2.19.0
Aug 13 00:20:50.606552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:20:50.603245 ignition[915]: Stage: kargs
Aug 13 00:20:50.603431 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.603439 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.631437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:20:50.604375 ignition[915]: kargs: kargs passed
Aug 13 00:20:50.604423 ignition[915]: Ignition finished successfully
Aug 13 00:20:50.658759 ignition[922]: Ignition 2.19.0
Aug 13 00:20:50.664739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:20:50.658766 ignition[922]: Stage: disks
Aug 13 00:20:50.673374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:20:50.658981 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:50.685742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:20:50.658990 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:50.697603 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:20:50.660061 ignition[922]: disks: disks passed
Aug 13 00:20:50.710407 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:20:50.660116 ignition[922]: Ignition finished successfully
Aug 13 00:20:50.721406 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:20:50.754437 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:20:50.826791 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Aug 13 00:20:50.834883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:20:50.857238 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:20:50.929156 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:20:50.929527 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:20:50.935352 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:20:50.980218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:51.007160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Aug 13 00:20:51.005201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:20:51.013360 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:20:51.072434 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:51.072465 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:51.072476 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:51.072486 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:51.049316 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:20:51.049354 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:20:51.065229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:51.078759 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:20:51.118442 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:20:51.145268 systemd-networkd[898]: eth0: Gained IPv6LL
Aug 13 00:20:51.606776 coreos-metadata[944]: Aug 13 00:20:51.606 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:20:51.617419 coreos-metadata[944]: Aug 13 00:20:51.613 INFO Fetch successful
Aug 13 00:20:51.617419 coreos-metadata[944]: Aug 13 00:20:51.613 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:20:51.638279 coreos-metadata[944]: Aug 13 00:20:51.626 INFO Fetch successful
Aug 13 00:20:51.645356 coreos-metadata[944]: Aug 13 00:20:51.640 INFO wrote hostname ci-4081.3.5-a-2fbd311b45 to /sysroot/etc/hostname
Aug 13 00:20:51.645771 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:20:51.852400 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:20:51.889526 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:20:51.915950 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:20:51.933912 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:20:52.984628 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:20:53.003369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:20:53.017213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:20:53.041199 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:20:53.048151 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:53.068170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:20:53.084590 ignition[1060]: INFO : Ignition 2.19.0
Aug 13 00:20:53.084590 ignition[1060]: INFO : Stage: mount
Aug 13 00:20:53.094337 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:53.094337 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:53.094337 ignition[1060]: INFO : mount: mount passed
Aug 13 00:20:53.094337 ignition[1060]: INFO : Ignition finished successfully
Aug 13 00:20:53.095211 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:20:53.126362 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:20:53.157321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:53.183151 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Aug 13 00:20:53.199280 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:53.199321 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:53.204473 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:53.213147 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:53.215209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:53.245749 ignition[1089]: INFO : Ignition 2.19.0
Aug 13 00:20:53.250198 ignition[1089]: INFO : Stage: files
Aug 13 00:20:53.250198 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:53.250198 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:53.250198 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:20:53.279792 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:20:53.279792 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:20:53.395499 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:20:53.404417 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:20:53.404417 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:20:53.395976 unknown[1089]: wrote ssh authorized keys file for user: core
Aug 13 00:20:53.441013 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:20:53.453376 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 13 00:20:53.501013 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:20:53.701408 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:53.715156 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 13 00:20:54.117643 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:20:54.381835 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:20:54.381835 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:54.415164 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:54.493890 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:54.493890 ignition[1089]: INFO : files: files passed
Aug 13 00:20:54.493890 ignition[1089]: INFO : Ignition finished successfully
Aug 13 00:20:54.429455 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:20:54.494462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:20:54.517375 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:20:54.535622 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:20:54.593641 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.593641 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.535718 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:20:54.625232 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.555078 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:20:54.571360 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:20:54.615420 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:20:54.676211 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:20:54.678194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:20:54.693404 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:20:54.705702 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:20:54.720282 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:20:54.740306 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:20:54.768899 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:20:54.787405 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:20:54.810628 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:20:54.812172 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:20:54.826664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:20:54.842703 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:20:54.856973 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:20:54.870434 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:20:54.870511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:20:54.891485 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:20:54.898044 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:20:54.911383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:20:54.925431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:20:54.941479 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:20:54.956248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:20:54.969693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:20:54.986342 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:20:54.999452 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:20:55.014074 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:20:55.026249 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:20:55.026329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:20:55.046975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:55.054017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:55.069772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:20:55.069831 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 13 00:20:55.084867 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:20:55.084944 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:20:55.107872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:20:55.107941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:20:55.122524 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:20:55.122578 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:20:55.137497 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:20:55.210435 ignition[1143]: INFO : Ignition 2.19.0 Aug 13 00:20:55.210435 ignition[1143]: INFO : Stage: umount Aug 13 00:20:55.210435 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:55.210435 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:55.210435 ignition[1143]: INFO : umount: umount passed Aug 13 00:20:55.210435 ignition[1143]: INFO : Ignition finished successfully Aug 13 00:20:55.137548 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:20:55.178424 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:20:55.209118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:20:55.217272 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:20:55.217363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:20:55.231304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:20:55.231376 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:20:55.261702 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:20:55.261825 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Aug 13 00:20:55.273564 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:20:55.273639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:20:55.299533 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:20:55.299604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:20:55.317027 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:20:55.317086 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:20:55.332866 systemd[1]: Stopped target network.target - Network. Aug 13 00:20:55.345785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:20:55.345869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:20:55.362125 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:20:55.384253 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:20:55.388157 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:20:55.401238 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:20:55.414363 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:20:55.434575 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:20:55.434663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:20:55.450374 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:20:55.450459 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:20:55.465566 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:20:55.465645 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:20:55.479857 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:20:55.479939 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Aug 13 00:20:55.495603 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:20:55.508533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:20:55.525113 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:20:55.532935 systemd-networkd[898]: eth0: DHCPv6 lease lost Aug 13 00:20:55.534952 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:20:55.535482 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:20:55.551284 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:20:55.551508 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:20:55.565724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:20:55.565801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:55.605364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:20:55.618215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:20:55.618301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:20:55.636038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:20:55.863458 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: Data path switched from VF: enP49132s1 Aug 13 00:20:55.636099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:55.656219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:20:55.656289 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:55.669990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:20:55.670052 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 00:20:55.685900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:20:55.705041 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:20:55.705381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:20:55.756550 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:20:55.756696 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:20:55.775551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:20:55.775679 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:55.790213 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:20:55.790268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:55.803425 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:20:55.803484 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:20:55.824253 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:20:55.842055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:20:55.863546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:20:55.863613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:55.878585 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:20:55.878666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:55.916432 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:20:55.934092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:20:55.934194 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:20:55.957949 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:20:55.958013 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:55.970213 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:20:55.970267 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:55.985898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:20:55.985955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:55.998597 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:20:55.998715 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:20:56.009658 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:20:56.009746 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:20:56.022416 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:20:56.193674 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 13 00:20:56.053324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:20:56.069387 systemd[1]: Switching root. 
Aug 13 00:20:56.203534 systemd-journald[217]: Journal stopped Aug 13 00:21:03.452677 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:21:03.452701 kernel: SELinux: policy capability open_perms=1 Aug 13 00:21:03.452711 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:21:03.452719 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:21:03.452728 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:21:03.452736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:21:03.452745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:21:03.452755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:21:03.452763 kernel: audit: type=1403 audit(1755044457.363:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:21:03.452772 systemd[1]: Successfully loaded SELinux policy in 229.241ms. Aug 13 00:21:03.452784 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.975ms. Aug 13 00:21:03.452794 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:21:03.452803 systemd[1]: Detected virtualization microsoft. Aug 13 00:21:03.452812 systemd[1]: Detected architecture arm64. Aug 13 00:21:03.452821 systemd[1]: Detected first boot. Aug 13 00:21:03.452832 systemd[1]: Hostname set to . Aug 13 00:21:03.452841 systemd[1]: Initializing machine ID from random generator. Aug 13 00:21:03.452850 zram_generator::config[1183]: No configuration found. Aug 13 00:21:03.452861 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:21:03.452869 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
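The gap between "Journal stopped" and the first replayed kernel message above spans the root switch. A minimal sketch of computing that gap from the syslog-style timestamp prefix; the year is an assumption, since the prefix omits it:

```python
from datetime import datetime

def parse_ts(entry: str) -> datetime:
    """Parse the 'Aug 13 00:20:56.203534' prefix of a console log entry.

    The prefix carries no year, so one is assumed (hypothetical, for illustration).
    """
    stamp = " ".join(entry.split()[:3])  # e.g. "Aug 13 00:20:56.203534"
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

stopped   = parse_ts("Aug 13 00:20:56.203534 systemd-journald[217]: Journal stopped")
restarted = parse_ts("Aug 13 00:21:03.452677 kernel: SELinux: policy capability network_peer_controls=1")
gap = (restarted - stopped).total_seconds()  # ~7.25 s with no journal coverage
```

Messages emitted during that window are buffered by the kernel and replayed once journald restarts, which is why the post-switch entries share nearly identical timestamps.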
Aug 13 00:21:03.452878 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:21:03.452887 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:21:03.452898 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:21:03.452908 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:21:03.452917 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:21:03.452926 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:21:03.452936 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:21:03.452946 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:21:03.452956 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:21:03.452966 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:21:03.452976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:21:03.452985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:21:03.452994 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:21:03.453004 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:21:03.453013 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:21:03.453023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:21:03.453032 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 13 00:21:03.453042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:21:03.453052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:21:03.453061 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:21:03.453073 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:21:03.453082 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:21:03.453092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:21:03.453101 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:21:03.453110 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:21:03.453121 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:21:03.453141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:21:03.453153 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:21:03.453162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:21:03.453173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:21:03.453183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:21:03.453195 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:21:03.453204 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:21:03.453214 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:21:03.453224 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:21:03.453233 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:21:03.453243 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:21:03.453252 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:21:03.453264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:21:03.453274 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:21:03.453283 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:21:03.453293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:03.453303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:21:03.453312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:21:03.453322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:03.453331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:21:03.453343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:03.453353 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:21:03.453362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:03.453373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:21:03.453383 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:21:03.453393 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:21:03.453403 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:21:03.453412 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:21:03.453423 kernel: fuse: init (API version 7.39)
Aug 13 00:21:03.453432 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:21:03.453442 kernel: loop: module loaded
Aug 13 00:21:03.453450 kernel: ACPI: bus type drm_connector registered
Aug 13 00:21:03.453459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:21:03.453469 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:21:03.453478 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:21:03.453503 systemd-journald[1276]: Collecting audit messages is disabled.
Aug 13 00:21:03.453526 systemd-journald[1276]: Journal started
Aug 13 00:21:03.453546 systemd-journald[1276]: Runtime Journal (/run/log/journal/f3ff23aa22f24cdb98c0e6798b38c369) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:21:02.291025 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:21:02.431200 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:21:02.431580 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:21:02.431925 systemd[1]: systemd-journald.service: Consumed 3.777s CPU time.
Aug 13 00:21:03.469047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:21:03.486225 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:21:03.486295 systemd[1]: Stopped verity-setup.service.
Aug 13 00:21:03.505919 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:21:03.506766 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:21:03.514023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:21:03.520974 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:21:03.526866 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:21:03.533768 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:21:03.540614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:21:03.546931 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:21:03.554029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:21:03.562411 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:21:03.562546 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:21:03.570073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:03.570253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:03.577382 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:21:03.577556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:21:03.584390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:03.584536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:03.592147 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:21:03.592301 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:21:03.599864 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:03.600001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:03.608167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:21:03.615850 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:21:03.623474 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:21:03.632846 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:21:03.648967 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:21:03.666263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:21:03.674041 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:21:03.680471 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:21:03.680518 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:21:03.687492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 00:21:03.703305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:21:03.711179 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:21:03.717228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:03.749314 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:21:03.757842 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:21:03.766709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:21:03.767851 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:21:03.774622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:21:03.775839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:21:03.783319 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:21:03.799392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:21:03.813819 systemd-journald[1276]: Time spent on flushing to /var/log/journal/f3ff23aa22f24cdb98c0e6798b38c369 is 15.544ms for 896 entries.
Aug 13 00:21:03.813819 systemd-journald[1276]: System Journal (/var/log/journal/f3ff23aa22f24cdb98c0e6798b38c369) is 8.0M, max 2.6G, 2.6G free.
Aug 13 00:21:03.864929 systemd-journald[1276]: Received client request to flush runtime journal.
Aug 13 00:21:03.824371 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 00:21:03.842083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:21:03.853727 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:21:03.861764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:21:03.871303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:21:03.887302 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:21:03.889475 kernel: loop0: detected capacity change from 0 to 31320
Aug 13 00:21:03.904408 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:21:03.920418 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 00:21:03.928631 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:21:03.945304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:21:03.994548 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:21:03.995217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 00:21:04.068152 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Aug 13 00:21:04.068169 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Aug 13 00:21:04.072661 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
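The journald flush message above reports both a duration and an entry count, so the average per-entry cost can be derived directly. A minimal sketch; the regex is an illustration, not journald's own format string:

```python
import re

# The flush report from this log, reproduced verbatim.
MSG = ("systemd-journald[1276]: Time spent on flushing to "
       "/var/log/journal/f3ff23aa22f24cdb98c0e6798b38c369 is 15.544ms for 896 entries.")

m = re.search(r'is ([\d.]+)ms for (\d+) entries', MSG)
flush_ms, entries = float(m.group(1)), int(m.group(2))
per_entry_us = flush_ms * 1000 / entries  # average cost per flushed entry, in microseconds
```

For this boot that works out to roughly 17.35 µs per entry flushed from the runtime journal to persistent storage.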
Aug 13 00:21:04.091313 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:21:04.311169 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:21:04.421170 kernel: loop1: detected capacity change from 0 to 207008
Aug 13 00:21:04.496166 kernel: loop2: detected capacity change from 0 to 114432
Aug 13 00:21:04.616096 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:21:04.635686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:21:04.656193 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Aug 13 00:21:04.656659 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Aug 13 00:21:04.661054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:21:04.942153 kernel: loop3: detected capacity change from 0 to 114328
Aug 13 00:21:05.380167 kernel: loop4: detected capacity change from 0 to 31320
Aug 13 00:21:05.395161 kernel: loop5: detected capacity change from 0 to 207008
Aug 13 00:21:05.418166 kernel: loop6: detected capacity change from 0 to 114432
Aug 13 00:21:05.434197 kernel: loop7: detected capacity change from 0 to 114328
Aug 13 00:21:05.457313 (sd-merge)[1345]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Aug 13 00:21:05.457745 (sd-merge)[1345]: Merged extensions into '/usr'.
Aug 13 00:21:05.462003 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:21:05.462021 systemd[1]: Reloading...
Aug 13 00:21:05.536159 zram_generator::config[1367]: No configuration found.
Aug 13 00:21:05.675217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:21:05.732161 systemd[1]: Reloading finished in 269 ms.
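Note that each loop-device capacity above appears exactly twice (loop0/loop4, loop1/loop5, and so on), one pair per sysext extension image named in the `(sd-merge)` line. A minimal sketch of grouping the capacity messages to expose that pairing; the sample lines are modeled on this log:

```python
import re
from collections import defaultdict

# Capacity messages modeled on the kernel loopN lines in this log.
CAPACITY_LINES = """\
kernel: loop0: detected capacity change from 0 to 31320
kernel: loop1: detected capacity change from 0 to 207008
kernel: loop2: detected capacity change from 0 to 114432
kernel: loop3: detected capacity change from 0 to 114328
kernel: loop4: detected capacity change from 0 to 31320
kernel: loop5: detected capacity change from 0 to 207008
kernel: loop6: detected capacity change from 0 to 114432
kernel: loop7: detected capacity change from 0 to 114328
"""

by_capacity = defaultdict(list)
for dev, cap in re.findall(r'(loop\d+): detected capacity change from 0 to (\d+)', CAPACITY_LINES):
    by_capacity[int(cap)].append(dev)

# Four distinct capacities, each seen on two loop devices.
duplicates = {cap: devs for cap, devs in by_capacity.items() if len(devs) == 2}
```

Whether the doubled attachments come from sysext staging each image twice is an inference from this log, not something the messages state outright.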
Aug 13 00:21:05.758075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:21:05.765791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:21:05.784362 systemd[1]: Starting ensure-sysext.service... Aug 13 00:21:05.789904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:21:05.798363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:21:05.824011 systemd[1]: Reloading requested from client PID 1427 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:21:05.824031 systemd[1]: Reloading... Aug 13 00:21:05.851373 systemd-udevd[1429]: Using default interface naming scheme 'v255'. Aug 13 00:21:05.859335 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:21:05.860706 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:21:05.861594 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:21:05.861832 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Aug 13 00:21:05.861879 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Aug 13 00:21:05.901180 zram_generator::config[1457]: No configuration found. Aug 13 00:21:05.922478 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:21:05.922489 systemd-tmpfiles[1428]: Skipping /boot Aug 13 00:21:05.931451 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 13 00:21:05.931472 systemd-tmpfiles[1428]: Skipping /boot Aug 13 00:21:06.013124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:06.070704 systemd[1]: Reloading finished in 246 ms. Aug 13 00:21:06.094655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:21:06.114495 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:06.163436 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:21:06.175045 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:21:06.194593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:21:06.209020 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:21:06.221715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:06.227510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:06.240315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:06.256508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:06.264714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:06.266212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:06.268658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:06.278318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:06.278474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 13 00:21:06.287680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:06.287882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:06.303279 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:21:06.314696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:06.320425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:06.329516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:06.339446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:06.347682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:06.354524 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:21:06.362448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:06.362641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:06.373094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:06.373273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:06.381971 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:06.383182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:06.402462 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Aug 13 00:21:06.409590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:06.413386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:06.424158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:21:06.433507 augenrules[1549]: No rules
Aug 13 00:21:06.434461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:06.445585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:06.451964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:06.452413 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:21:06.464379 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:21:06.474195 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 00:21:06.483400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:06.483543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:06.491601 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:21:06.491739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:21:06.499650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:06.499806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:06.509160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:06.509335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:06.522476 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:21:06.531927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:21:06.532016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:21:06.545798 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:21:06.571653 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:21:06.581234 systemd-resolved[1519]: Positive Trust Anchors:
Aug 13 00:21:06.581247 systemd-resolved[1519]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:21:06.581278 systemd-resolved[1519]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:21:06.591484 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:21:06.648007 systemd-resolved[1519]: Using system hostname 'ci-4081.3.5-a-2fbd311b45'.
Aug 13 00:21:06.650107 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:21:06.661341 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 13 00:21:06.661437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:21:06.766172 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Aug 13 00:21:06.808229 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:21:06.808366 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 00:21:06.808393 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 00:21:06.824908 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 00:21:06.825008 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 00:21:06.816850 systemd-networkd[1579]: lo: Link UP
Aug 13 00:21:06.816862 systemd-networkd[1579]: lo: Gained carrier
Aug 13 00:21:06.821271 systemd-networkd[1579]: Enumeration completed
Aug 13 00:21:06.824530 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:06.824534 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:21:06.826834 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 00:21:06.837159 kernel: Console: switching to colour dummy device 80x25
Aug 13 00:21:06.843155 kernel: hv_balloon: Memory hot add disabled on ARM64
Aug 13 00:21:06.849659 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:21:06.851224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:06.864617 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:21:06.894653 systemd[1]: Reached target network.target - Network.
Aug 13 00:21:06.914677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1585)
Aug 13 00:21:06.919960 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:21:06.934681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:21:06.935497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:06.963881 kernel: mlx5_core bfec:00:02.0 enP49132s1: Link up
Aug 13 00:21:06.966598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:06.991167 kernel: hv_netvsc 000d3a07-3936-000d-3a07-3936000d3a07 eth0: Data path switched to VF: enP49132s1
Aug 13 00:21:06.991927 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:06.991991 systemd-networkd[1579]: enP49132s1: Link UP
Aug 13 00:21:06.992346 systemd-networkd[1579]: eth0: Link UP
Aug 13 00:21:06.992357 systemd-networkd[1579]: eth0: Gained carrier
Aug 13 00:21:06.992371 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:06.995174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:21:07.003393 systemd-networkd[1579]: enP49132s1: Gained carrier
Aug 13 00:21:07.010258 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:21:07.011904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:21:07.020727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:21:07.020973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:07.029202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:07.075657 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 00:21:07.093280 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 00:21:07.101380 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:21:07.160216 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:21:07.202114 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 00:21:07.210770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:21:07.223335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 00:21:07.236703 lvm[1664]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:21:07.269875 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 00:21:08.169310 systemd-networkd[1579]: eth0: Gained IPv6LL
Aug 13 00:21:08.171944 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:21:08.179781 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:21:08.208168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:08.727477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:21:08.736459 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:21:11.468966 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:21:11.484689 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:21:11.496352 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:21:11.527677 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:21:11.534973 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:21:11.542198 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:21:11.549521 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:21:11.558426 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:21:11.564820 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:21:11.572790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:21:11.580043 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:21:11.580093 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:21:11.585506 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:21:11.605424 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:21:11.613527 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:21:11.623992 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:21:11.630850 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:21:11.637180 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:21:11.642928 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:21:11.649089 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:21:11.649119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:21:11.684254 systemd[1]: Starting chronyd.service - NTP client/server...
Aug 13 00:21:11.692299 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:21:11.706342 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 00:21:11.716988 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Aug 13 00:21:11.722451 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:21:11.738904 chronyd[1684]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Aug 13 00:21:11.740303 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:21:11.748315 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:21:11.757444 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:21:11.757502 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Aug 13 00:21:11.759480 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Aug 13 00:21:11.765807 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Aug 13 00:21:11.769322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:21:11.771790 KVP[1686]: KVP starting; pid is:1686
Aug 13 00:21:11.772681 jq[1682]: false
Aug 13 00:21:11.791382 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:21:11.801035 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:21:11.816489 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:21:11.822246 chronyd[1684]: Timezone right/UTC failed leap second check, ignoring
Aug 13 00:21:11.822498 chronyd[1684]: Loaded seccomp filter (level 2)
Aug 13 00:21:11.825364 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:21:11.835547 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:21:11.844152 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:21:11.852358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:21:11.855703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found loop4
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found loop5
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found loop6
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found loop7
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda1
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda2
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda3
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found usr
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda4
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda6
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda7
Aug 13 00:21:11.857477 extend-filesystems[1685]: Found sda9
Aug 13 00:21:11.857477 extend-filesystems[1685]: Checking size of /dev/sda9
Aug 13 00:21:12.017993 kernel: hv_utils: KVP IC version 4.0
Aug 13 00:21:11.904563 KVP[1686]: KVP LIC Version: 3.1
Aug 13 00:21:11.857695 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:21:12.018297 extend-filesystems[1685]: Old size kept for /dev/sda9
Aug 13 00:21:12.018297 extend-filesystems[1685]: Found sr0
Aug 13 00:21:11.895115 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:21:12.055381 update_engine[1698]: I20250813 00:21:11.978188 1698 main.cc:92] Flatcar Update Engine starting
Aug 13 00:21:11.911684 systemd[1]: Started chronyd.service - NTP client/server.
Aug 13 00:21:12.055822 jq[1705]: true
Aug 13 00:21:11.938496 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:21:11.942647 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:21:11.943820 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:21:11.944003 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:21:11.964495 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:21:11.965244 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:21:11.983972 systemd-logind[1697]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Aug 13 00:21:11.990598 systemd-logind[1697]: New seat seat0.
Aug 13 00:21:12.012297 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:21:12.033169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:21:12.049461 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:21:12.049643 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:21:12.090112 jq[1728]: true
Aug 13 00:21:12.097687 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:21:12.112966 tar[1714]: linux-arm64/LICENSE
Aug 13 00:21:12.113692 tar[1714]: linux-arm64/helm
Aug 13 00:21:12.130103 dbus-daemon[1679]: [system] SELinux support is enabled
Aug 13 00:21:12.131453 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:21:12.134698 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1721)
Aug 13 00:21:12.150360 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:21:12.150923 dbus-daemon[1679]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 00:21:12.150421 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:21:12.160169 update_engine[1698]: I20250813 00:21:12.158232 1698 update_check_scheduler.cc:74] Next update check in 10m40s
Aug 13 00:21:12.164547 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:21:12.164576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:21:12.183304 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:21:12.218625 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:21:12.273912 coreos-metadata[1678]: Aug 13 00:21:12.273 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:21:12.283671 coreos-metadata[1678]: Aug 13 00:21:12.283 INFO Fetch successful
Aug 13 00:21:12.283671 coreos-metadata[1678]: Aug 13 00:21:12.283 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Aug 13 00:21:12.292714 coreos-metadata[1678]: Aug 13 00:21:12.292 INFO Fetch successful
Aug 13 00:21:12.292714 coreos-metadata[1678]: Aug 13 00:21:12.292 INFO Fetching http://168.63.129.16/machine/d911f84d-d2c9-4d57-a346-d50d200c061c/47b29995%2Da1ad%2D465a%2Db3fa%2D2eee04311f11.%5Fci%2D4081.3.5%2Da%2D2fbd311b45?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Aug 13 00:21:12.296927 coreos-metadata[1678]: Aug 13 00:21:12.296 INFO Fetch successful
Aug 13 00:21:12.297228 coreos-metadata[1678]: Aug 13 00:21:12.297 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:21:12.310532 coreos-metadata[1678]: Aug 13 00:21:12.310 INFO Fetch successful
Aug 13 00:21:12.341924 bash[1784]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:21:12.343763 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:21:12.358742 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 00:21:12.363607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 00:21:12.375751 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:21:12.441996 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:21:12.856535 containerd[1729]: time="2025-08-13T00:21:12.856426800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 00:21:12.942160 containerd[1729]: time="2025-08-13T00:21:12.940112640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.944578 containerd[1729]: time="2025-08-13T00:21:12.944517560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:12.944733 containerd[1729]: time="2025-08-13T00:21:12.944715080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:21:12.944807 containerd[1729]: time="2025-08-13T00:21:12.944794280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:21:12.945076 containerd[1729]: time="2025-08-13T00:21:12.945057360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:21:12.945199 containerd[1729]: time="2025-08-13T00:21:12.945183960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.945368 containerd[1729]: time="2025-08-13T00:21:12.945350160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:12.945463 containerd[1729]: time="2025-08-13T00:21:12.945448400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.946282 containerd[1729]: time="2025-08-13T00:21:12.946259720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:12.946435 containerd[1729]: time="2025-08-13T00:21:12.946364120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.946435 containerd[1729]: time="2025-08-13T00:21:12.946386080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:12.946435 containerd[1729]: time="2025-08-13T00:21:12.946397080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.946626 containerd[1729]: time="2025-08-13T00:21:12.946610280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.948454 containerd[1729]: time="2025-08-13T00:21:12.948422640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:12.948725 containerd[1729]: time="2025-08-13T00:21:12.948703600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:12.948800 containerd[1729]: time="2025-08-13T00:21:12.948788200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:21:12.948978 containerd[1729]: time="2025-08-13T00:21:12.948961600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:21:12.949110 containerd[1729]: time="2025-08-13T00:21:12.949086480Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:21:12.977077 containerd[1729]: time="2025-08-13T00:21:12.977031320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:21:12.977262 containerd[1729]: time="2025-08-13T00:21:12.977246800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979241160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979271200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979287600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979505960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979814000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979955840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.979988880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980003120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980017800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980031440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980044840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980059920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980078600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980408 containerd[1729]: time="2025-08-13T00:21:12.980092560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980107560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980120760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980182920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980207480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980221880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980248360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980263000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980277240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980288960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980301800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980322840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980339440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980353840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.980753 containerd[1729]: time="2025-08-13T00:21:12.980367320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.981101 containerd[1729]: time="2025-08-13T00:21:12.980380000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.981101 containerd[1729]: time="2025-08-13T00:21:12.981057120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 00:21:12.981202 containerd[1729]: time="2025-08-13T00:21:12.981188160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.981347 containerd[1729]: time="2025-08-13T00:21:12.981262720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.981347 containerd[1729]: time="2025-08-13T00:21:12.981280320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:21:12.981599 containerd[1729]: time="2025-08-13T00:21:12.981485080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:21:12.981599 containerd[1729]: time="2025-08-13T00:21:12.981523680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 00:21:12.981599 containerd[1729]: time="2025-08-13T00:21:12.981535640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:21:12.981599 containerd[1729]: time="2025-08-13T00:21:12.981548520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 00:21:12.981599 containerd[1729]: time="2025-08-13T00:21:12.981557520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.983380 containerd[1729]: time="2025-08-13T00:21:12.982673760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 00:21:12.983380 containerd[1729]: time="2025-08-13T00:21:12.982698240Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 00:21:12.983380 containerd[1729]: time="2025-08-13T00:21:12.982712200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:21:12.983493 containerd[1729]: time="2025-08-13T00:21:12.983057760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:21:12.983493 containerd[1729]: time="2025-08-13T00:21:12.983147760Z" level=info msg="Connect containerd service"
Aug 13 00:21:12.983493 containerd[1729]: time="2025-08-13T00:21:12.983188400Z" level=info msg="using legacy CRI server"
Aug 13 00:21:12.983493 containerd[1729]: time="2025-08-13T00:21:12.983195240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:21:12.983493 containerd[1729]: time="2025-08-13T00:21:12.983313000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:21:12.988744 containerd[1729]: time="2025-08-13T00:21:12.988687000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized:
failed to load cni config" Aug 13 00:21:12.988744 containerd[1729]: time="2025-08-13T00:21:12.989269120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:21:12.989494 containerd[1729]: time="2025-08-13T00:21:12.989442920Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:21:12.989601 containerd[1729]: time="2025-08-13T00:21:12.989572040Z" level=info msg="Start subscribing containerd event" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.994207200Z" level=info msg="Start recovering state" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.995243200Z" level=info msg="Start event monitor" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.995271600Z" level=info msg="Start snapshots syncer" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.995281680Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.995289040Z" level=info msg="Start streaming server" Aug 13 00:21:12.997307 containerd[1729]: time="2025-08-13T00:21:12.996176840Z" level=info msg="containerd successfully booted in 0.140586s" Aug 13 00:21:12.995531 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:21:13.068722 tar[1714]: linux-arm64/README.md Aug 13 00:21:13.086974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:21:13.184347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:13.192313 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:13.197128 sshd_keygen[1709]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:21:13.217847 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:21:13.232579 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Aug 13 00:21:13.242435 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Aug 13 00:21:13.249722 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:21:13.250551 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:21:13.268888 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:21:13.277601 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Aug 13 00:21:13.300409 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:21:13.315723 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:21:13.324524 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 13 00:21:13.332063 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:21:13.339382 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:21:13.346287 systemd[1]: Startup finished in 761ms (kernel) + 13.132s (initrd) + 16.210s (userspace) = 30.104s.
Aug 13 00:21:13.745055 kubelet[1814]: E0813 00:21:13.744990 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:21:13.747753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:21:13.747910 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:21:13.962486 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:13.962858 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:13.970773 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:21:13.976400 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:21:13.978401 systemd-logind[1697]: New session 2 of user core.
Aug 13 00:21:13.982002 systemd-logind[1697]: New session 1 of user core.
Aug 13 00:21:14.020572 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:21:14.029582 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:21:14.045783 (systemd)[1851]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:21:14.327168 systemd[1851]: Queued start job for default target default.target.
Aug 13 00:21:14.337184 systemd[1851]: Created slice app.slice - User Application Slice.
Aug 13 00:21:14.337353 systemd[1851]: Reached target paths.target - Paths.
Aug 13 00:21:14.337438 systemd[1851]: Reached target timers.target - Timers.
Aug 13 00:21:14.338790 systemd[1851]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:21:14.350065 systemd[1851]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:21:14.350224 systemd[1851]: Reached target sockets.target - Sockets.
Aug 13 00:21:14.350238 systemd[1851]: Reached target basic.target - Basic System.
Aug 13 00:21:14.350285 systemd[1851]: Reached target default.target - Main User Target.
Aug 13 00:21:14.350314 systemd[1851]: Startup finished in 297ms.
Aug 13 00:21:14.350445 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:21:14.358329 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:21:14.359249 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:21:15.277160 waagent[1834]: 2025-08-13T00:21:15.274093Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Aug 13 00:21:15.280658 waagent[1834]: 2025-08-13T00:21:15.280576Z INFO Daemon Daemon OS: flatcar 4081.3.5
Aug 13 00:21:15.285757 waagent[1834]: 2025-08-13T00:21:15.285682Z INFO Daemon Daemon Python: 3.11.9
Aug 13 00:21:15.292286 waagent[1834]: 2025-08-13T00:21:15.292204Z INFO Daemon Daemon Run daemon
Aug 13 00:21:15.296746 waagent[1834]: 2025-08-13T00:21:15.296681Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5'
Aug 13 00:21:15.306558 waagent[1834]: 2025-08-13T00:21:15.306410Z INFO Daemon Daemon Using waagent for provisioning
Aug 13 00:21:15.312241 waagent[1834]: 2025-08-13T00:21:15.312183Z INFO Daemon Daemon Activate resource disk
Aug 13 00:21:15.317254 waagent[1834]: 2025-08-13T00:21:15.317193Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Aug 13 00:21:15.328982 waagent[1834]: 2025-08-13T00:21:15.328914Z INFO Daemon Daemon Found device: None
Aug 13 00:21:15.333988 waagent[1834]: 2025-08-13T00:21:15.333923Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Aug 13 00:21:15.343708 waagent[1834]: 2025-08-13T00:21:15.343648Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Aug 13 00:21:15.358073 waagent[1834]: 2025-08-13T00:21:15.358003Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Aug 13 00:21:15.364424 waagent[1834]: 2025-08-13T00:21:15.364351Z INFO Daemon Daemon Running default provisioning handler
Aug 13 00:21:15.379172 waagent[1834]: 2025-08-13T00:21:15.377476Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Aug 13 00:21:15.393346 waagent[1834]: 2025-08-13T00:21:15.393273Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Aug 13 00:21:15.404420 waagent[1834]: 2025-08-13T00:21:15.404341Z INFO Daemon Daemon cloud-init is enabled: False
Aug 13 00:21:15.409891 waagent[1834]: 2025-08-13T00:21:15.409832Z INFO Daemon Daemon Copying ovf-env.xml
Aug 13 00:21:15.537276 waagent[1834]: 2025-08-13T00:21:15.535944Z INFO Daemon Daemon Successfully mounted dvd
Aug 13 00:21:15.564990 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Aug 13 00:21:15.568168 waagent[1834]: 2025-08-13T00:21:15.568035Z INFO Daemon Daemon Detect protocol endpoint
Aug 13 00:21:15.573664 waagent[1834]: 2025-08-13T00:21:15.573585Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Aug 13 00:21:15.579924 waagent[1834]: 2025-08-13T00:21:15.579859Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Aug 13 00:21:15.586908 waagent[1834]: 2025-08-13T00:21:15.586846Z INFO Daemon Daemon Test for route to 168.63.129.16
Aug 13 00:21:15.592808 waagent[1834]: 2025-08-13T00:21:15.592752Z INFO Daemon Daemon Route to 168.63.129.16 exists
Aug 13 00:21:15.598120 waagent[1834]: 2025-08-13T00:21:15.598064Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Aug 13 00:21:15.631866 waagent[1834]: 2025-08-13T00:21:15.631815Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Aug 13 00:21:15.639395 waagent[1834]: 2025-08-13T00:21:15.639349Z INFO Daemon Daemon Wire protocol version:2012-11-30
Aug 13 00:21:15.645180 waagent[1834]: 2025-08-13T00:21:15.645107Z INFO Daemon Daemon Server preferred version:2015-04-05
Aug 13 00:21:16.179189 waagent[1834]: 2025-08-13T00:21:16.178791Z INFO Daemon Daemon Initializing goal state during protocol detection
Aug 13 00:21:16.186297 waagent[1834]: 2025-08-13T00:21:16.186215Z INFO Daemon Daemon Forcing an update of the goal state.
Aug 13 00:21:16.195927 waagent[1834]: 2025-08-13T00:21:16.195863Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Aug 13 00:21:16.222240 waagent[1834]: 2025-08-13T00:21:16.222188Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Aug 13 00:21:16.229046 waagent[1834]: 2025-08-13T00:21:16.228991Z INFO Daemon
Aug 13 00:21:16.232070 waagent[1834]: 2025-08-13T00:21:16.232018Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9c4ff7a0-e24c-4130-8b99-7f10dabc82bd eTag: 5492698744224379791 source: Fabric]
Aug 13 00:21:16.243989 waagent[1834]: 2025-08-13T00:21:16.243937Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Aug 13 00:21:16.251408 waagent[1834]: 2025-08-13T00:21:16.251344Z INFO Daemon
Aug 13 00:21:16.254336 waagent[1834]: 2025-08-13T00:21:16.254284Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Aug 13 00:21:16.266053 waagent[1834]: 2025-08-13T00:21:16.266014Z INFO Daemon Daemon Downloading artifacts profile blob
Aug 13 00:21:16.349870 waagent[1834]: 2025-08-13T00:21:16.349765Z INFO Daemon Downloaded certificate {'thumbprint': 'C1DD2E098ACADA746D40578D7B10416B8E5FE54E', 'hasPrivateKey': False}
Aug 13 00:21:16.360759 waagent[1834]: 2025-08-13T00:21:16.360701Z INFO Daemon Downloaded certificate {'thumbprint': '185DFFF2990EB4AA1A7804613976182712048EFF', 'hasPrivateKey': True}
Aug 13 00:21:16.371524 waagent[1834]: 2025-08-13T00:21:16.371463Z INFO Daemon Fetch goal state completed
Aug 13 00:21:16.384596 waagent[1834]: 2025-08-13T00:21:16.384548Z INFO Daemon Daemon Starting provisioning
Aug 13 00:21:16.390277 waagent[1834]: 2025-08-13T00:21:16.390200Z INFO Daemon Daemon Handle ovf-env.xml.
Aug 13 00:21:16.395660 waagent[1834]: 2025-08-13T00:21:16.395602Z INFO Daemon Daemon Set hostname [ci-4081.3.5-a-2fbd311b45]
Aug 13 00:21:16.428460 waagent[1834]: 2025-08-13T00:21:16.428377Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-a-2fbd311b45]
Aug 13 00:21:16.435989 waagent[1834]: 2025-08-13T00:21:16.435880Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Aug 13 00:21:16.442682 waagent[1834]: 2025-08-13T00:21:16.442621Z INFO Daemon Daemon Primary interface is [eth0]
Aug 13 00:21:16.499931 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:16.500181 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:21:16.500213 systemd-networkd[1579]: eth0: DHCP lease lost
Aug 13 00:21:16.501269 waagent[1834]: 2025-08-13T00:21:16.501178Z INFO Daemon Daemon Create user account if not exists
Aug 13 00:21:16.507634 waagent[1834]: 2025-08-13T00:21:16.507570Z INFO Daemon Daemon User core already exists, skip useradd
Aug 13 00:21:16.513590 waagent[1834]: 2025-08-13T00:21:16.513529Z INFO Daemon Daemon Configure sudoer
Aug 13 00:21:16.518536 waagent[1834]: 2025-08-13T00:21:16.518468Z INFO Daemon Daemon Configure sshd
Aug 13 00:21:16.523668 waagent[1834]: 2025-08-13T00:21:16.523602Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Aug 13 00:21:16.537811 waagent[1834]: 2025-08-13T00:21:16.537742Z INFO Daemon Daemon Deploy ssh public key.
Aug 13 00:21:16.545190 systemd-networkd[1579]: eth0: DHCPv6 lease lost
Aug 13 00:21:16.557193 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:21:17.666707 waagent[1834]: 2025-08-13T00:21:17.666643Z INFO Daemon Daemon Provisioning complete
Aug 13 00:21:17.686400 waagent[1834]: 2025-08-13T00:21:17.686333Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Aug 13 00:21:17.693348 waagent[1834]: 2025-08-13T00:21:17.693273Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Aug 13 00:21:17.703851 waagent[1834]: 2025-08-13T00:21:17.703789Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Aug 13 00:21:17.842890 waagent[1908]: 2025-08-13T00:21:17.842807Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Aug 13 00:21:17.843247 waagent[1908]: 2025-08-13T00:21:17.842968Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5
Aug 13 00:21:17.843247 waagent[1908]: 2025-08-13T00:21:17.843022Z INFO ExtHandler ExtHandler Python: 3.11.9
Aug 13 00:21:17.918947 waagent[1908]: 2025-08-13T00:21:17.918755Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Aug 13 00:21:17.919129 waagent[1908]: 2025-08-13T00:21:17.919072Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:21:17.919239 waagent[1908]: 2025-08-13T00:21:17.919194Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:21:17.928927 waagent[1908]: 2025-08-13T00:21:17.928837Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Aug 13 00:21:17.935497 waagent[1908]: 2025-08-13T00:21:17.935413Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Aug 13 00:21:17.936103 waagent[1908]: 2025-08-13T00:21:17.936055Z INFO ExtHandler
Aug 13 00:21:17.936190 waagent[1908]: 2025-08-13T00:21:17.936158Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 789b4b04-47fb-47c4-adc0-1ecd386e896f eTag: 5492698744224379791 source: Fabric]
Aug 13 00:21:17.936514 waagent[1908]: 2025-08-13T00:21:17.936470Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Aug 13 00:21:17.937089 waagent[1908]: 2025-08-13T00:21:17.937043Z INFO ExtHandler
Aug 13 00:21:17.937183 waagent[1908]: 2025-08-13T00:21:17.937123Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Aug 13 00:21:17.941406 waagent[1908]: 2025-08-13T00:21:17.941346Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Aug 13 00:21:18.047593 waagent[1908]: 2025-08-13T00:21:18.047493Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C1DD2E098ACADA746D40578D7B10416B8E5FE54E', 'hasPrivateKey': False}
Aug 13 00:21:18.048023 waagent[1908]: 2025-08-13T00:21:18.047973Z INFO ExtHandler Downloaded certificate {'thumbprint': '185DFFF2990EB4AA1A7804613976182712048EFF', 'hasPrivateKey': True}
Aug 13 00:21:18.048470 waagent[1908]: 2025-08-13T00:21:18.048426Z INFO ExtHandler Fetch goal state completed
Aug 13 00:21:18.065324 waagent[1908]: 2025-08-13T00:21:18.065252Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1908
Aug 13 00:21:18.065500 waagent[1908]: 2025-08-13T00:21:18.065456Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Aug 13 00:21:18.067330 waagent[1908]: 2025-08-13T00:21:18.067278Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk']
Aug 13 00:21:18.067753 waagent[1908]: 2025-08-13T00:21:18.067711Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Aug 13 00:21:18.197751 waagent[1908]: 2025-08-13T00:21:18.197242Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Aug 13 00:21:18.197751 waagent[1908]: 2025-08-13T00:21:18.197477Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Aug 13 00:21:18.204637 waagent[1908]: 2025-08-13T00:21:18.204603Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Aug 13 00:21:18.211700 systemd[1]: Reloading requested from client PID 1923 ('systemctl') (unit waagent.service)...
Aug 13 00:21:18.211718 systemd[1]: Reloading...
Aug 13 00:21:18.307173 zram_generator::config[1959]: No configuration found.
Aug 13 00:21:18.413379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:21:18.493641 systemd[1]: Reloading finished in 281 ms.
Aug 13 00:21:18.522644 waagent[1908]: 2025-08-13T00:21:18.522348Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Aug 13 00:21:18.527764 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit waagent.service)...
Aug 13 00:21:18.527778 systemd[1]: Reloading...
Aug 13 00:21:18.613234 zram_generator::config[2046]: No configuration found.
Aug 13 00:21:18.717379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:21:18.794978 systemd[1]: Reloading finished in 266 ms.
Aug 13 00:21:18.815192 waagent[1908]: 2025-08-13T00:21:18.814357Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Aug 13 00:21:18.815192 waagent[1908]: 2025-08-13T00:21:18.814534Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Aug 13 00:21:19.238179 waagent[1908]: 2025-08-13T00:21:19.237357Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Aug 13 00:21:19.238179 waagent[1908]: 2025-08-13T00:21:19.237976Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Aug 13 00:21:19.238861 waagent[1908]: 2025-08-13T00:21:19.238770Z INFO ExtHandler ExtHandler Starting env monitor service.
Aug 13 00:21:19.239304 waagent[1908]: 2025-08-13T00:21:19.239203Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Aug 13 00:21:19.240312 waagent[1908]: 2025-08-13T00:21:19.239547Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:21:19.240312 waagent[1908]: 2025-08-13T00:21:19.239634Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:21:19.240312 waagent[1908]: 2025-08-13T00:21:19.239831Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Aug 13 00:21:19.240312 waagent[1908]: 2025-08-13T00:21:19.240001Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 00:21:19.240312 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 00:21:19.240312 waagent[1908]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 00:21:19.240312 waagent[1908]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 00:21:19.240312 waagent[1908]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:21:19.240312 waagent[1908]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:21:19.240312 waagent[1908]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:21:19.240682 waagent[1908]: 2025-08-13T00:21:19.240610Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Aug 13 00:21:19.240864 waagent[1908]: 2025-08-13T00:21:19.240811Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Aug 13 00:21:19.241311 waagent[1908]: 2025-08-13T00:21:19.241208Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Aug 13 00:21:19.241406 waagent[1908]: 2025-08-13T00:21:19.241312Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Aug 13 00:21:19.241637 waagent[1908]: 2025-08-13T00:21:19.241574Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:21:19.241900 waagent[1908]: 2025-08-13T00:21:19.241840Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Aug 13 00:21:19.242857 waagent[1908]: 2025-08-13T00:21:19.242274Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:21:19.242857 waagent[1908]: 2025-08-13T00:21:19.242465Z INFO EnvHandler ExtHandler Configure routes
Aug 13 00:21:19.242857 waagent[1908]: 2025-08-13T00:21:19.242528Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 00:21:19.242857 waagent[1908]: 2025-08-13T00:21:19.242569Z INFO EnvHandler ExtHandler Routes:None
Aug 13 00:21:19.247871 waagent[1908]: 2025-08-13T00:21:19.247818Z INFO ExtHandler ExtHandler
Aug 13 00:21:19.248270 waagent[1908]: 2025-08-13T00:21:19.248215Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0a7f5749-4855-44f0-96fd-a4d47e089b07 correlation ed565cf8-4a43-4d13-b78d-5342455c880d created: 2025-08-13T00:20:00.530533Z]
Aug 13 00:21:19.249536 waagent[1908]: 2025-08-13T00:21:19.249477Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Aug 13 00:21:19.251446 waagent[1908]: 2025-08-13T00:21:19.251398Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Aug 13 00:21:19.291546 waagent[1908]: 2025-08-13T00:21:19.291430Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 50F99F75-D534-4773-AF39-C69A46A3D3CC;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Aug 13 00:21:19.320185 waagent[1908]: 2025-08-13T00:21:19.319792Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:21:19.320185 waagent[1908]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:21:19.320185 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:21:19.320185 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:39:36 brd ff:ff:ff:ff:ff:ff
Aug 13 00:21:19.320185 waagent[1908]: 3: enP49132s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:39:36 brd ff:ff:ff:ff:ff:ff\ altname enP49132p0s2
Aug 13 00:21:19.320185 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:21:19.320185 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:21:19.320185 waagent[1908]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:21:19.320185 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:21:19.320185 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Aug 13 00:21:19.320185 waagent[1908]: 2: eth0 inet6 fe80::20d:3aff:fe07:3936/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Aug 13 00:21:19.362544 waagent[1908]: 2025-08-13T00:21:19.362465Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Aug 13 00:21:19.362544 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.362544 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.362544 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.362544 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.362544 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.362544 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.362544 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:21:19.362544 waagent[1908]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:21:19.362544 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:21:19.366030 waagent[1908]: 2025-08-13T00:21:19.365963Z INFO EnvHandler ExtHandler Current Firewall rules:
Aug 13 00:21:19.366030 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.366030 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.366030 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.366030 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.366030 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:21:19.366030 waagent[1908]: pkts bytes target prot opt in out source destination
Aug 13 00:21:19.366030 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:21:19.366030 waagent[1908]: 6 519 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:21:19.366030 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:21:19.366386 waagent[1908]: 2025-08-13T00:21:19.366288Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Aug 13 00:21:23.942363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:21:23.952803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:21:24.068834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:21:24.073221 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:21:24.216005 kubelet[2140]: E0813 00:21:24.215891 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:21:24.219462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:21:24.219669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:21:34.442390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:21:34.450380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:21:34.553300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:21:34.557919 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:21:34.643442 kubelet[2155]: E0813 00:21:34.643387 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:21:34.646100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:21:34.646284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:21:35.614481 chronyd[1684]: Selected source PHC0
Aug 13 00:21:44.692413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:21:44.699329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:21:44.804357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:21:44.815514 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:21:44.923317 kubelet[2170]: E0813 00:21:44.923245 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:21:44.925959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:21:44.926252 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:21:47.401680 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:21:47.407613 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:56594.service - OpenSSH per-connection server daemon (10.200.16.10:56594). Aug 13 00:21:47.935628 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 56594 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:47.937007 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:47.941283 systemd-logind[1697]: New session 3 of user core. Aug 13 00:21:47.946292 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:21:48.361962 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:56600.service - OpenSSH per-connection server daemon (10.200.16.10:56600). Aug 13 00:21:48.803929 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 56600 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:48.805375 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:48.809370 systemd-logind[1697]: New session 4 of user core. Aug 13 00:21:48.820313 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:21:49.126150 sshd[2183]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:49.130015 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:56600.service: Deactivated successfully. Aug 13 00:21:49.131796 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:21:49.133721 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:21:49.134642 systemd-logind[1697]: Removed session 4. Aug 13 00:21:49.212902 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:56616.service - OpenSSH per-connection server daemon (10.200.16.10:56616). 
Aug 13 00:21:49.695092 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 56616 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:49.696517 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:49.701333 systemd-logind[1697]: New session 5 of user core. Aug 13 00:21:49.707309 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:21:50.052412 sshd[2190]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:50.056274 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:56616.service: Deactivated successfully. Aug 13 00:21:50.057875 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:21:50.059673 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:21:50.060851 systemd-logind[1697]: Removed session 5. Aug 13 00:21:50.139086 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:56628.service - OpenSSH per-connection server daemon (10.200.16.10:56628). Aug 13 00:21:50.603623 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 56628 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:50.604992 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:50.608841 systemd-logind[1697]: New session 6 of user core. Aug 13 00:21:50.617297 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:21:50.954872 sshd[2197]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:50.958444 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:56628.service: Deactivated successfully. Aug 13 00:21:50.960053 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:21:50.961933 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:21:50.962999 systemd-logind[1697]: Removed session 6. 
Aug 13 00:21:51.042625 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:50910.service - OpenSSH per-connection server daemon (10.200.16.10:50910). Aug 13 00:21:51.509607 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 50910 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:51.511082 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:51.516570 systemd-logind[1697]: New session 7 of user core. Aug 13 00:21:51.522322 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:21:51.996259 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:21:51.996550 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.023877 sudo[2207]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:52.110601 sshd[2204]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:52.114734 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:50910.service: Deactivated successfully. Aug 13 00:21:52.116713 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:21:52.118870 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:21:52.119820 systemd-logind[1697]: Removed session 7. Aug 13 00:21:52.201406 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:50924.service - OpenSSH per-connection server daemon (10.200.16.10:50924). Aug 13 00:21:52.670399 sshd[2212]: Accepted publickey for core from 10.200.16.10 port 50924 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:52.671965 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:52.676211 systemd-logind[1697]: New session 8 of user core. Aug 13 00:21:52.684339 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:21:52.935747 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:21:52.936630 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.939941 sudo[2216]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:52.944767 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:21:52.945406 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.965538 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:52.966864 auditctl[2219]: No rules Aug 13 00:21:52.967221 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:21:52.967405 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:21:52.970581 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:52.997118 augenrules[2237]: No rules Aug 13 00:21:52.998721 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:21:53.001178 sudo[2215]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:53.075454 sshd[2212]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:53.079842 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:50924.service: Deactivated successfully. Aug 13 00:21:53.082807 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:21:53.083544 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:21:53.084709 systemd-logind[1697]: Removed session 8. Aug 13 00:21:53.165351 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:50938.service - OpenSSH per-connection server daemon (10.200.16.10:50938). 
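The sequence above (`rm` of the two rules files, then `systemctl restart audit-rules`) ends with augenrules reporting "No rules" because the restart re-merges /etc/audit/rules.d/*.rules, which is now empty. A rough stand-in for that merge step, using temporary paths instead of the real /etc/audit locations:

```shell
# Stand-in for the augenrules merge: concatenate rules.d/*.rules and report
# "No rules" when nothing remains (mirrors the auditctl/augenrules lines above).
rules_d=$(mktemp -d)        # stand-in for /etc/audit/rules.d (just emptied)
merged=$(mktemp)            # stand-in for /etc/audit/audit.rules
cat "$rules_d"/*.rules > "$merged" 2>/dev/null || true
if [ ! -s "$merged" ]; then
    echo "No rules"
fi
rm -rf "$rules_d" "$merged"
```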
Aug 13 00:21:53.648518 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 50938 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:53.649877 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:53.653719 systemd-logind[1697]: New session 9 of user core. Aug 13 00:21:53.665286 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:21:53.920024 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:21:53.920342 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:54.935553 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Aug 13 00:21:54.942238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:21:54.950338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:55.034531 (dockerd)[2267]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:21:55.035458 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:21:55.064909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:21:55.076435 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:55.111911 kubelet[2273]: E0813 00:21:55.111863 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:21:55.114530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:21:55.114689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:21:55.981450 dockerd[2267]: time="2025-08-13T00:21:55.981376127Z" level=info msg="Starting up" Aug 13 00:21:56.466164 dockerd[2267]: time="2025-08-13T00:21:56.465932212Z" level=info msg="Loading containers: start." Aug 13 00:21:56.685171 kernel: Initializing XFRM netlink socket Aug 13 00:21:56.882774 systemd-networkd[1579]: docker0: Link UP Aug 13 00:21:56.912276 dockerd[2267]: time="2025-08-13T00:21:56.912231989Z" level=info msg="Loading containers: done." 
Aug 13 00:21:56.933210 dockerd[2267]: time="2025-08-13T00:21:56.933036310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:21:56.933736 dockerd[2267]: time="2025-08-13T00:21:56.933412391Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:21:56.933736 dockerd[2267]: time="2025-08-13T00:21:56.933543712Z" level=info msg="Daemon has completed initialization" Aug 13 00:21:56.990152 dockerd[2267]: time="2025-08-13T00:21:56.990049372Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:21:56.990298 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:21:57.244177 update_engine[1698]: I20250813 00:21:57.243783 1698 update_attempter.cc:509] Updating boot flags... Aug 13 00:21:57.311391 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2425) Aug 13 00:21:58.451580 containerd[1729]: time="2025-08-13T00:21:58.451537289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:21:59.364003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882103343.mount: Deactivated successfully. 
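The dockerd warning above about "Not using native diff for overlay2" is driven by the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR setting. When the overlay module is loaded, its runtime value is usually exposed under /sys; the exact path can vary by kernel build, so treat this as a best-effort probe rather than a guaranteed interface:

```shell
# Best-effort probe for the overlayfs redirect_dir setting the dockerd
# warning refers to; falls back gracefully when the file isn't exposed.
f=/sys/module/overlay/parameters/redirect_dir
if [ -r "$f" ]; then
    printf 'redirect_dir=%s\n' "$(cat "$f")"
else
    echo "overlay redirect_dir parameter not exposed on this kernel"
fi
```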
Aug 13 00:22:00.519182 containerd[1729]: time="2025-08-13T00:22:00.518587485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.520946 containerd[1729]: time="2025-08-13T00:22:00.520727450Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327781" Aug 13 00:22:00.523717 containerd[1729]: time="2025-08-13T00:22:00.523684577Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.529198 containerd[1729]: time="2025-08-13T00:22:00.529084551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.530421 containerd[1729]: time="2025-08-13T00:22:00.530233193Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 2.078648944s" Aug 13 00:22:00.530421 containerd[1729]: time="2025-08-13T00:22:00.530277994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Aug 13 00:22:00.531780 containerd[1729]: time="2025-08-13T00:22:00.531736477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:22:01.690981 containerd[1729]: time="2025-08-13T00:22:01.690937211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.693715 containerd[1729]: time="2025-08-13T00:22:01.693674418Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529696" Aug 13 00:22:01.697946 containerd[1729]: time="2025-08-13T00:22:01.696449465Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.700499 containerd[1729]: time="2025-08-13T00:22:01.700460635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.701629 containerd[1729]: time="2025-08-13T00:22:01.701583757Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.16981012s" Aug 13 00:22:01.701629 containerd[1729]: time="2025-08-13T00:22:01.701623917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Aug 13 00:22:01.702126 containerd[1729]: time="2025-08-13T00:22:01.702069279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:22:02.758461 containerd[1729]: time="2025-08-13T00:22:02.758400799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.760918 containerd[1729]: time="2025-08-13T00:22:02.760604285Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484138" Aug 13 00:22:02.765771 containerd[1729]: time="2025-08-13T00:22:02.765735617Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.770834 containerd[1729]: time="2025-08-13T00:22:02.770775390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.772543 containerd[1729]: time="2025-08-13T00:22:02.772416914Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.070306355s" Aug 13 00:22:02.772543 containerd[1729]: time="2025-08-13T00:22:02.772469754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Aug 13 00:22:02.773764 containerd[1729]: time="2025-08-13T00:22:02.773468996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:22:03.765372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495434389.mount: Deactivated successfully. 
Aug 13 00:22:04.136639 containerd[1729]: time="2025-08-13T00:22:04.136359512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.138886 containerd[1729]: time="2025-08-13T00:22:04.138845878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378405" Aug 13 00:22:04.143224 containerd[1729]: time="2025-08-13T00:22:04.143167849Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.147830 containerd[1729]: time="2025-08-13T00:22:04.147765020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.148527 containerd[1729]: time="2025-08-13T00:22:04.148372141Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.374862825s" Aug 13 00:22:04.148527 containerd[1729]: time="2025-08-13T00:22:04.148418382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Aug 13 00:22:04.149266 containerd[1729]: time="2025-08-13T00:22:04.149000103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:22:04.890691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316905964.mount: Deactivated successfully. 
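The pull durations containerd prints can be turned into a rough effective transfer rate. Using the kube-proxy line above (27377424 bytes pulled in 1.374862825 s, sizes as reported by containerd):

```shell
# Rough effective pull rate for the kube-proxy image from the log above:
# reported bytes / reported wall time, converted to MiB/s.
awk 'BEGIN { printf "%.1f MiB/s\n", 27377424 / 1.374862825 / 1048576 }'
# → 19.0 MiB/s
```

The same arithmetic on the much larger etcd pull later in this log (67941650 bytes over 16.5 s) shows a noticeably lower rate, consistent with the pull stretching across the kubelet restart window.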
Aug 13 00:22:05.192934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 00:22:05.202478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:05.897285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:05.909482 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:05.981669 kubelet[2546]: E0813 00:22:05.981605 2546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:05.984638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:05.984965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:22:07.128839 containerd[1729]: time="2025-08-13T00:22:07.128779479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.131535 containerd[1729]: time="2025-08-13T00:22:07.131493246Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 13 00:22:07.136257 containerd[1729]: time="2025-08-13T00:22:07.136217738Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.141055 containerd[1729]: time="2025-08-13T00:22:07.140990149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.142978 containerd[1729]: time="2025-08-13T00:22:07.142580153Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.99331113s" Aug 13 00:22:07.142978 containerd[1729]: time="2025-08-13T00:22:07.142641314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:22:07.144851 containerd[1729]: time="2025-08-13T00:22:07.144744079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:22:08.192053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492515388.mount: Deactivated successfully. 
Aug 13 00:22:08.629099 containerd[1729]: time="2025-08-13T00:22:08.628257893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:08.674247 containerd[1729]: time="2025-08-13T00:22:08.674192922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:22:08.677225 containerd[1729]: time="2025-08-13T00:22:08.677164332Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:08.721896 containerd[1729]: time="2025-08-13T00:22:08.721816957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:08.723064 containerd[1729]: time="2025-08-13T00:22:08.722624719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.57781916s" Aug 13 00:22:08.723064 containerd[1729]: time="2025-08-13T00:22:08.722661359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:22:08.723206 containerd[1729]: time="2025-08-13T00:22:08.723155041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:22:11.447299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812506333.mount: Deactivated successfully. Aug 13 00:22:16.192284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Aug 13 00:22:16.200331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:21.157524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:21.162192 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:21.196792 kubelet[2606]: E0813 00:22:21.196736 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:21.198863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:21.198989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:25.182211 containerd[1729]: time="2025-08-13T00:22:25.181389923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:25.184015 containerd[1729]: time="2025-08-13T00:22:25.183710930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Aug 13 00:22:25.186281 containerd[1729]: time="2025-08-13T00:22:25.186222977Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:25.194257 containerd[1729]: time="2025-08-13T00:22:25.194170879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:25.195155 containerd[1729]: time="2025-08-13T00:22:25.194890521Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 16.47170604s" Aug 13 00:22:25.195155 containerd[1729]: time="2025-08-13T00:22:25.194934641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 13 00:22:30.306673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:30.316623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:30.352859 systemd[1]: Reloading requested from client PID 2689 ('systemctl') (unit session-9.scope)... Aug 13 00:22:30.352883 systemd[1]: Reloading... Aug 13 00:22:30.475161 zram_generator::config[2735]: No configuration found. Aug 13 00:22:30.588615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:30.668628 systemd[1]: Reloading finished in 315 ms. Aug 13 00:22:30.721387 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:22:30.721486 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:22:30.721851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:30.724468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:31.337291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
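The docker.socket warning during the reload above is systemd normalizing a legacy path: `ListenStream=/var/run/docker.sock` is rewritten to `/run/docker.sock`, since /var/run is a symlink to /run on modern systems. A toy version of that rewrite (the `normalize` helper is hypothetical, for illustration only):

```shell
# Toy version of systemd's legacy-path rewrite mentioned in the warning above:
# anything under /var/run/ is mapped onto /run/, everything else passes through.
normalize() {
    case "$1" in
        /var/run/*) echo "/run/${1#/var/run/}" ;;
        *)          echo "$1" ;;
    esac
}
normalize /var/run/docker.sock   # → /run/docker.sock
```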
Aug 13 00:22:31.350483 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:31.391866 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:31.393847 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:31.393847 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:31.393847 kubelet[2797]: I0813 00:22:31.392320 2797 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:32.084786 kubelet[2797]: I0813 00:22:32.084739 2797 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:22:32.084786 kubelet[2797]: I0813 00:22:32.084776 2797 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:32.085084 kubelet[2797]: I0813 00:22:32.085060 2797 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:22:32.105999 kubelet[2797]: E0813 00:22:32.105678 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:32.108007 kubelet[2797]: I0813 
00:22:32.107962 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:32.113799 kubelet[2797]: E0813 00:22:32.113763 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:32.114219 kubelet[2797]: I0813 00:22:32.114007 2797 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:32.117150 kubelet[2797]: I0813 00:22:32.117103 2797 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:22:32.117898 kubelet[2797]: I0813 00:22:32.117488 2797 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:32.117898 kubelet[2797]: I0813 00:22:32.117519 2797 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-a-2fbd311b45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:32.117898 kubelet[2797]: I0813 00:22:32.117704 2797 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:22:32.117898 kubelet[2797]: I0813 00:22:32.117714 2797 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:22:32.118092 kubelet[2797]: I0813 00:22:32.117841 2797 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:32.120858 kubelet[2797]: I0813 00:22:32.120831 2797 
kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:22:32.120954 kubelet[2797]: I0813 00:22:32.120936 2797 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:32.120980 kubelet[2797]: I0813 00:22:32.120967 2797 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:22:32.120980 kubelet[2797]: I0813 00:22:32.120979 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:32.125696 kubelet[2797]: W0813 00:22:32.125614 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2fbd311b45&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:32.125779 kubelet[2797]: E0813 00:22:32.125703 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2fbd311b45&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:32.125842 kubelet[2797]: I0813 00:22:32.125822 2797 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:32.126876 kubelet[2797]: I0813 00:22:32.126298 2797 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:32.126876 kubelet[2797]: W0813 00:22:32.126363 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 00:22:32.128236 kubelet[2797]: I0813 00:22:32.128020 2797 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:22:32.128236 kubelet[2797]: I0813 00:22:32.128062 2797 server.go:1287] "Started kubelet" Aug 13 00:22:32.135212 kubelet[2797]: I0813 00:22:32.135161 2797 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:32.136194 kubelet[2797]: E0813 00:22:32.136006 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-2fbd311b45.185b2bb8b6939556 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-2fbd311b45,UID:ci-4081.3.5-a-2fbd311b45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-2fbd311b45,},FirstTimestamp:2025-08-13 00:22:32.128042326 +0000 UTC m=+0.774245370,LastTimestamp:2025-08-13 00:22:32.128042326 +0000 UTC m=+0.774245370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-2fbd311b45,}" Aug 13 00:22:32.137165 kubelet[2797]: W0813 00:22:32.136507 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:32.137165 kubelet[2797]: E0813 00:22:32.136566 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: 
connection refused" logger="UnhandledError" Aug 13 00:22:32.137165 kubelet[2797]: I0813 00:22:32.136952 2797 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:22:32.139184 kubelet[2797]: I0813 00:22:32.138181 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:32.139184 kubelet[2797]: I0813 00:22:32.138556 2797 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:32.140872 kubelet[2797]: I0813 00:22:32.140842 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:22:32.143066 kubelet[2797]: I0813 00:22:32.143040 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:32.144327 kubelet[2797]: I0813 00:22:32.144308 2797 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:22:32.144666 kubelet[2797]: E0813 00:22:32.144644 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:32.145382 kubelet[2797]: I0813 00:22:32.145363 2797 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:22:32.145535 kubelet[2797]: I0813 00:22:32.145526 2797 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:32.146706 kubelet[2797]: I0813 00:22:32.146684 2797 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:32.146872 kubelet[2797]: I0813 00:22:32.146852 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:32.147256 kubelet[2797]: W0813 00:22:32.147221 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:32.147376 kubelet[2797]: E0813 00:22:32.147357 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:32.147515 kubelet[2797]: E0813 00:22:32.147494 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2fbd311b45?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Aug 13 00:22:32.149323 kubelet[2797]: E0813 00:22:32.149300 2797 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:22:32.150055 kubelet[2797]: I0813 00:22:32.150037 2797 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:32.191011 kubelet[2797]: I0813 00:22:32.190976 2797 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:22:32.191212 kubelet[2797]: I0813 00:22:32.191196 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:32.191299 kubelet[2797]: I0813 00:22:32.191290 2797 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:32.245677 kubelet[2797]: E0813 00:22:32.245642 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:32.346999 kubelet[2797]: E0813 00:22:32.345907 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:32.348614 
kubelet[2797]: E0813 00:22:32.348582 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2fbd311b45?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Aug 13 00:22:32.446256 kubelet[2797]: E0813 00:22:32.446167 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:32.483838 kubelet[2797]: I0813 00:22:32.483807 2797 policy_none.go:49] "None policy: Start" Aug 13 00:22:32.483838 kubelet[2797]: I0813 00:22:32.483843 2797 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:22:32.483919 kubelet[2797]: I0813 00:22:32.483857 2797 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:22:32.492550 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:22:32.506102 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:22:32.509656 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:22:32.520324 kubelet[2797]: I0813 00:22:32.520088 2797 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:32.520324 kubelet[2797]: I0813 00:22:32.520328 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:32.520488 kubelet[2797]: I0813 00:22:32.520339 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:32.520951 kubelet[2797]: I0813 00:22:32.520706 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:32.522246 kubelet[2797]: E0813 00:22:32.522108 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Aug 13 00:22:32.522246 kubelet[2797]: E0813 00:22:32.522182 2797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:32.592680 kubelet[2797]: I0813 00:22:32.592619 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:32.594922 kubelet[2797]: I0813 00:22:32.594627 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:22:32.594922 kubelet[2797]: I0813 00:22:32.594659 2797 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:22:32.594922 kubelet[2797]: I0813 00:22:32.594679 2797 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:22:32.594922 kubelet[2797]: I0813 00:22:32.594685 2797 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:22:32.594922 kubelet[2797]: E0813 00:22:32.594734 2797 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:22:32.596215 kubelet[2797]: W0813 00:22:32.595933 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:32.596381 kubelet[2797]: E0813 00:22:32.596354 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:32.622789 kubelet[2797]: I0813 00:22:32.622398 2797 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.623366 kubelet[2797]: E0813 00:22:32.623329 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.706659 systemd[1]: Created slice kubepods-burstable-podd00f81d17d324ea398955c0d0ccebb8a.slice - libcontainer container kubepods-burstable-podd00f81d17d324ea398955c0d0ccebb8a.slice. Aug 13 00:22:32.716935 kubelet[2797]: E0813 00:22:32.716895 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.719067 systemd[1]: Created slice kubepods-burstable-pod980a08b3768e18552279b17c1c7c2087.slice - libcontainer container kubepods-burstable-pod980a08b3768e18552279b17c1c7c2087.slice. Aug 13 00:22:32.727464 kubelet[2797]: E0813 00:22:32.727420 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.730364 systemd[1]: Created slice kubepods-burstable-pod5e36d868227303457fc9ad96657124cb.slice - libcontainer container kubepods-burstable-pod5e36d868227303457fc9ad96657124cb.slice. 
Aug 13 00:22:32.732270 kubelet[2797]: E0813 00:22:32.732236 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747537 kubelet[2797]: I0813 00:22:32.747452 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747537 kubelet[2797]: I0813 00:22:32.747503 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747537 kubelet[2797]: I0813 00:22:32.747523 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747537 kubelet[2797]: I0813 00:22:32.747540 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 
00:22:32.747722 kubelet[2797]: I0813 00:22:32.747554 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747722 kubelet[2797]: I0813 00:22:32.747569 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e36d868227303457fc9ad96657124cb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-2fbd311b45\" (UID: \"5e36d868227303457fc9ad96657124cb\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747722 kubelet[2797]: I0813 00:22:32.747584 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747722 kubelet[2797]: I0813 00:22:32.747600 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.747722 kubelet[2797]: I0813 00:22:32.747614 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.749888 kubelet[2797]: E0813 00:22:32.749850 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2fbd311b45?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Aug 13 00:22:32.825913 kubelet[2797]: I0813 00:22:32.825558 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:32.825913 kubelet[2797]: E0813 00:22:32.825885 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:33.018794 containerd[1729]: time="2025-08-13T00:22:33.018633859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-2fbd311b45,Uid:d00f81d17d324ea398955c0d0ccebb8a,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:33.028904 containerd[1729]: time="2025-08-13T00:22:33.028858247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-2fbd311b45,Uid:980a08b3768e18552279b17c1c7c2087,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:33.034087 containerd[1729]: time="2025-08-13T00:22:33.034002782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-2fbd311b45,Uid:5e36d868227303457fc9ad96657124cb,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:33.228681 kubelet[2797]: I0813 00:22:33.228364 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:33.228811 kubelet[2797]: E0813 00:22:33.228720 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:33.459776 kubelet[2797]: W0813 00:22:33.459542 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2fbd311b45&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:33.459776 kubelet[2797]: E0813 00:22:33.459616 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2fbd311b45&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:33.550824 kubelet[2797]: E0813 00:22:33.550782 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2fbd311b45?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Aug 13 00:22:33.621696 kubelet[2797]: W0813 00:22:33.621659 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:33.621846 kubelet[2797]: E0813 00:22:33.621706 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:33.641833 kubelet[2797]: W0813 00:22:33.641760 2797 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:33.641833 kubelet[2797]: E0813 00:22:33.641803 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:33.690984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307340449.mount: Deactivated successfully. Aug 13 00:22:33.708860 containerd[1729]: time="2025-08-13T00:22:33.708808110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:33.716594 containerd[1729]: time="2025-08-13T00:22:33.716225451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:22:33.718886 containerd[1729]: time="2025-08-13T00:22:33.718831458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:33.722165 containerd[1729]: time="2025-08-13T00:22:33.721536306Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:33.723564 containerd[1729]: time="2025-08-13T00:22:33.723458631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:33.726000 containerd[1729]: time="2025-08-13T00:22:33.725916398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:33.728441 containerd[1729]: time="2025-08-13T00:22:33.728398125Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:33.732724 containerd[1729]: time="2025-08-13T00:22:33.732676617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:33.734001 containerd[1729]: time="2025-08-13T00:22:33.733525539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 714.81108ms" Aug 13 00:22:33.737339 containerd[1729]: time="2025-08-13T00:22:33.737297430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 703.216168ms" Aug 13 00:22:33.753947 containerd[1729]: time="2025-08-13T00:22:33.753898396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 724.952909ms" 
Aug 13 00:22:33.804589 kubelet[2797]: W0813 00:22:33.804551 2797 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Aug 13 00:22:33.804732 kubelet[2797]: E0813 00:22:33.804597 2797 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:34.032094 kubelet[2797]: I0813 00:22:34.031577 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:34.032094 kubelet[2797]: E0813 00:22:34.031907 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:34.109819 kubelet[2797]: E0813 00:22:34.109769 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:34.375002 containerd[1729]: time="2025-08-13T00:22:34.374549654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:34.375002 containerd[1729]: time="2025-08-13T00:22:34.374609774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:34.375002 containerd[1729]: time="2025-08-13T00:22:34.374626414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.375002 containerd[1729]: time="2025-08-13T00:22:34.374724134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.380359 containerd[1729]: time="2025-08-13T00:22:34.380068949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:34.380359 containerd[1729]: time="2025-08-13T00:22:34.380165309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:34.380359 containerd[1729]: time="2025-08-13T00:22:34.380189669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.380359 containerd[1729]: time="2025-08-13T00:22:34.380276070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.383408 containerd[1729]: time="2025-08-13T00:22:34.383293078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:34.383408 containerd[1729]: time="2025-08-13T00:22:34.383345998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:34.383408 containerd[1729]: time="2025-08-13T00:22:34.383358158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.384147 containerd[1729]: time="2025-08-13T00:22:34.383993160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.416707 systemd[1]: Started cri-containerd-01959fe1f8df0d5c6264b238c0beeab82fe417946ad933533eb8f6918c83982b.scope - libcontainer container 01959fe1f8df0d5c6264b238c0beeab82fe417946ad933533eb8f6918c83982b. Aug 13 00:22:34.418876 systemd[1]: Started cri-containerd-c71031283f19d4693f8c994fd4713004c282713e87ecf6b94ae3c9093d5b8892.scope - libcontainer container c71031283f19d4693f8c994fd4713004c282713e87ecf6b94ae3c9093d5b8892. Aug 13 00:22:34.429367 systemd[1]: Started cri-containerd-ee4ba5601b06b38fd9513c3b7f4d5d06c4b953c97c8178b9ec43adb4fd645a4c.scope - libcontainer container ee4ba5601b06b38fd9513c3b7f4d5d06c4b953c97c8178b9ec43adb4fd645a4c. Aug 13 00:22:34.472118 containerd[1729]: time="2025-08-13T00:22:34.471958366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-2fbd311b45,Uid:980a08b3768e18552279b17c1c7c2087,Namespace:kube-system,Attempt:0,} returns sandbox id \"01959fe1f8df0d5c6264b238c0beeab82fe417946ad933533eb8f6918c83982b\"" Aug 13 00:22:34.492098 containerd[1729]: time="2025-08-13T00:22:34.491978262Z" level=info msg="CreateContainer within sandbox \"01959fe1f8df0d5c6264b238c0beeab82fe417946ad933533eb8f6918c83982b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:22:34.494098 containerd[1729]: time="2025-08-13T00:22:34.493822427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-2fbd311b45,Uid:5e36d868227303457fc9ad96657124cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c71031283f19d4693f8c994fd4713004c282713e87ecf6b94ae3c9093d5b8892\"" Aug 13 00:22:34.497088 containerd[1729]: time="2025-08-13T00:22:34.496914956Z" level=info msg="CreateContainer within 
sandbox \"c71031283f19d4693f8c994fd4713004c282713e87ecf6b94ae3c9093d5b8892\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:22:34.501413 containerd[1729]: time="2025-08-13T00:22:34.501381489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-2fbd311b45,Uid:d00f81d17d324ea398955c0d0ccebb8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee4ba5601b06b38fd9513c3b7f4d5d06c4b953c97c8178b9ec43adb4fd645a4c\"" Aug 13 00:22:34.504446 containerd[1729]: time="2025-08-13T00:22:34.504373977Z" level=info msg="CreateContainer within sandbox \"ee4ba5601b06b38fd9513c3b7f4d5d06c4b953c97c8178b9ec43adb4fd645a4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:22:34.566715 containerd[1729]: time="2025-08-13T00:22:34.566646551Z" level=info msg="CreateContainer within sandbox \"c71031283f19d4693f8c994fd4713004c282713e87ecf6b94ae3c9093d5b8892\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6bf99fb025dc918b04451c5293046950d5dca1e471f713e9b62a44ea31c5d61d\"" Aug 13 00:22:34.570551 containerd[1729]: time="2025-08-13T00:22:34.570453122Z" level=info msg="CreateContainer within sandbox \"01959fe1f8df0d5c6264b238c0beeab82fe417946ad933533eb8f6918c83982b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f0c0c9cdc86f32147e05ba168dbdaf740d666bc50c3c5f27bf5e6eb75aef227\"" Aug 13 00:22:34.570907 containerd[1729]: time="2025-08-13T00:22:34.570729243Z" level=info msg="StartContainer for \"6bf99fb025dc918b04451c5293046950d5dca1e471f713e9b62a44ea31c5d61d\"" Aug 13 00:22:34.576257 containerd[1729]: time="2025-08-13T00:22:34.575855777Z" level=info msg="CreateContainer within sandbox \"ee4ba5601b06b38fd9513c3b7f4d5d06c4b953c97c8178b9ec43adb4fd645a4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e854c967f0304b3c6b8812c36fc859e34596dfec66667a2aaea59efc23f0e2a\"" Aug 13 00:22:34.576257 containerd[1729]: 
time="2025-08-13T00:22:34.576068498Z" level=info msg="StartContainer for \"5f0c0c9cdc86f32147e05ba168dbdaf740d666bc50c3c5f27bf5e6eb75aef227\"" Aug 13 00:22:34.581985 containerd[1729]: time="2025-08-13T00:22:34.581941154Z" level=info msg="StartContainer for \"9e854c967f0304b3c6b8812c36fc859e34596dfec66667a2aaea59efc23f0e2a\"" Aug 13 00:22:34.598358 systemd[1]: Started cri-containerd-6bf99fb025dc918b04451c5293046950d5dca1e471f713e9b62a44ea31c5d61d.scope - libcontainer container 6bf99fb025dc918b04451c5293046950d5dca1e471f713e9b62a44ea31c5d61d. Aug 13 00:22:34.631358 systemd[1]: Started cri-containerd-5f0c0c9cdc86f32147e05ba168dbdaf740d666bc50c3c5f27bf5e6eb75aef227.scope - libcontainer container 5f0c0c9cdc86f32147e05ba168dbdaf740d666bc50c3c5f27bf5e6eb75aef227. Aug 13 00:22:34.641382 systemd[1]: Started cri-containerd-9e854c967f0304b3c6b8812c36fc859e34596dfec66667a2aaea59efc23f0e2a.scope - libcontainer container 9e854c967f0304b3c6b8812c36fc859e34596dfec66667a2aaea59efc23f0e2a. Aug 13 00:22:34.657971 containerd[1729]: time="2025-08-13T00:22:34.657921367Z" level=info msg="StartContainer for \"6bf99fb025dc918b04451c5293046950d5dca1e471f713e9b62a44ea31c5d61d\" returns successfully" Aug 13 00:22:34.717282 containerd[1729]: time="2025-08-13T00:22:34.717240533Z" level=info msg="StartContainer for \"9e854c967f0304b3c6b8812c36fc859e34596dfec66667a2aaea59efc23f0e2a\" returns successfully" Aug 13 00:22:34.717740 containerd[1729]: time="2025-08-13T00:22:34.717493453Z" level=info msg="StartContainer for \"5f0c0c9cdc86f32147e05ba168dbdaf740d666bc50c3c5f27bf5e6eb75aef227\" returns successfully" Aug 13 00:22:35.625463 kubelet[2797]: E0813 00:22:35.624853 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:35.629378 kubelet[2797]: E0813 00:22:35.626958 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:35.632877 kubelet[2797]: E0813 00:22:35.632822 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:35.634671 kubelet[2797]: I0813 00:22:35.634490 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:36.635155 kubelet[2797]: E0813 00:22:36.634621 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:36.635155 kubelet[2797]: E0813 00:22:36.634927 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:36.635743 kubelet[2797]: E0813 00:22:36.635729 2797 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:36.766110 kubelet[2797]: E0813 00:22:36.766058 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-a-2fbd311b45\" not found" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:36.954147 kubelet[2797]: I0813 00:22:36.952597 2797 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.045316 kubelet[2797]: I0813 00:22:37.045114 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.064602 kubelet[2797]: E0813 00:22:37.064360 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-2fbd311b45\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.064602 kubelet[2797]: I0813 00:22:37.064456 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.069080 kubelet[2797]: E0813 00:22:37.068843 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.069080 kubelet[2797]: I0813 00:22:37.068875 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.077456 kubelet[2797]: E0813 00:22:37.077416 2797 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.135724 kubelet[2797]: I0813 00:22:37.135450 2797 apiserver.go:52] "Watching apiserver" Aug 13 00:22:37.145830 kubelet[2797]: I0813 00:22:37.145784 2797 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:22:37.636147 kubelet[2797]: I0813 00:22:37.635010 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.636147 kubelet[2797]: I0813 00:22:37.635026 2797 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:37.644343 kubelet[2797]: W0813 00:22:37.644281 2797 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:37.646614 kubelet[2797]: W0813 00:22:37.646442 2797 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:38.787266 systemd[1]: Reloading requested from client PID 3067 ('systemctl') (unit session-9.scope)... Aug 13 00:22:38.787654 systemd[1]: Reloading... Aug 13 00:22:38.882159 zram_generator::config[3110]: No configuration found. Aug 13 00:22:39.005499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:39.105181 systemd[1]: Reloading finished in 317 ms. Aug 13 00:22:39.142348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:39.160736 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:22:39.160960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:39.173838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:39.298161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:39.311528 (kubelet)[3171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:39.404172 kubelet[3171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:39.404841 kubelet[3171]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:39.404841 kubelet[3171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:39.404841 kubelet[3171]: I0813 00:22:39.404592 3171 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:39.415428 kubelet[3171]: I0813 00:22:39.415364 3171 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:22:39.415428 kubelet[3171]: I0813 00:22:39.415408 3171 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:39.415696 kubelet[3171]: I0813 00:22:39.415676 3171 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:22:39.417009 kubelet[3171]: I0813 00:22:39.416987 3171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:22:39.419783 kubelet[3171]: I0813 00:22:39.419278 3171 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:39.423948 kubelet[3171]: E0813 00:22:39.423906 3171 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:39.423948 kubelet[3171]: I0813 00:22:39.423944 3171 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:39.429011 kubelet[3171]: I0813 00:22:39.428979 3171 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:22:39.429234 kubelet[3171]: I0813 00:22:39.429205 3171 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:39.429433 kubelet[3171]: I0813 00:22:39.429234 3171 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-2fbd311b45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:39.429433 kubelet[3171]: I0813 00:22:39.429431 3171 topology_manager.go:138] "Creating topology manager 
with none policy" Aug 13 00:22:39.429548 kubelet[3171]: I0813 00:22:39.429442 3171 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:22:39.429548 kubelet[3171]: I0813 00:22:39.429483 3171 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:39.430493 kubelet[3171]: I0813 00:22:39.429614 3171 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:22:39.430493 kubelet[3171]: I0813 00:22:39.429630 3171 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:39.430493 kubelet[3171]: I0813 00:22:39.429648 3171 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:22:39.430493 kubelet[3171]: I0813 00:22:39.429658 3171 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:39.431308 kubelet[3171]: I0813 00:22:39.431284 3171 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:39.431915 kubelet[3171]: I0813 00:22:39.431895 3171 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:39.432460 kubelet[3171]: I0813 00:22:39.432441 3171 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:22:39.432567 kubelet[3171]: I0813 00:22:39.432557 3171 server.go:1287] "Started kubelet" Aug 13 00:22:39.435450 kubelet[3171]: I0813 00:22:39.435433 3171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:22:39.439030 kubelet[3171]: I0813 00:22:39.439000 3171 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:39.439414 kubelet[3171]: I0813 00:22:39.439104 3171 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:39.443602 kubelet[3171]: I0813 00:22:39.443563 3171 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:22:39.447970 kubelet[3171]: I0813 
00:22:39.447879 3171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:39.448209 kubelet[3171]: I0813 00:22:39.448191 3171 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:39.450663 kubelet[3171]: I0813 00:22:39.450640 3171 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:22:39.451017 kubelet[3171]: E0813 00:22:39.450995 3171 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2fbd311b45\" not found" Aug 13 00:22:39.459465 kubelet[3171]: I0813 00:22:39.459413 3171 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:39.460446 kubelet[3171]: I0813 00:22:39.459750 3171 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:22:39.460446 kubelet[3171]: I0813 00:22:39.459775 3171 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:39.460446 kubelet[3171]: I0813 00:22:39.459884 3171 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:39.468144 kubelet[3171]: I0813 00:22:39.466581 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:39.469323 kubelet[3171]: I0813 00:22:39.469287 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:22:39.469456 kubelet[3171]: I0813 00:22:39.469445 3171 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:22:39.469522 kubelet[3171]: I0813 00:22:39.469510 3171 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:22:39.469571 kubelet[3171]: I0813 00:22:39.469563 3171 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:22:39.469869 kubelet[3171]: E0813 00:22:39.469648 3171 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:22:39.481832 kubelet[3171]: I0813 00:22:39.481803 3171 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:39.483493 kubelet[3171]: E0813 00:22:39.483456 3171 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:22:39.540505 kubelet[3171]: I0813 00:22:39.540477 3171 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:22:39.540760 kubelet[3171]: I0813 00:22:39.540645 3171 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:39.540760 kubelet[3171]: I0813 00:22:39.540718 3171 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:39.541154 kubelet[3171]: I0813 00:22:39.540991 3171 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:22:39.541154 kubelet[3171]: I0813 00:22:39.541007 3171 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:22:39.541154 kubelet[3171]: I0813 00:22:39.541025 3171 policy_none.go:49] "None policy: Start" Aug 13 00:22:39.541154 kubelet[3171]: I0813 00:22:39.541035 3171 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:22:39.541154 kubelet[3171]: I0813 00:22:39.541046 3171 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:22:39.541345 kubelet[3171]: I0813 00:22:39.541332 3171 state_mem.go:75] "Updated machine memory state" Aug 13 00:22:39.545416 kubelet[3171]: I0813 00:22:39.545369 3171 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:39.546173 
kubelet[3171]: I0813 00:22:39.545550 3171 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:39.546173 kubelet[3171]: I0813 00:22:39.545569 3171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:39.546173 kubelet[3171]: I0813 00:22:39.545780 3171 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:39.547838 kubelet[3171]: E0813 00:22:39.547793 3171 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:22:39.571190 kubelet[3171]: I0813 00:22:39.570937 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.571190 kubelet[3171]: I0813 00:22:39.571080 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.571372 kubelet[3171]: I0813 00:22:39.571348 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.588950 kubelet[3171]: W0813 00:22:39.587998 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:39.588950 kubelet[3171]: E0813 00:22:39.588169 3171 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-2fbd311b45\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.588950 kubelet[3171]: W0813 00:22:39.588958 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:39.589199 kubelet[3171]: W0813 00:22:39.589039 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:39.589294 kubelet[3171]: E0813 00:22:39.589265 3171 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.649575 kubelet[3171]: I0813 00:22:39.649538 3171 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.661064 kubelet[3171]: I0813 00:22:39.660929 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.661064 kubelet[3171]: I0813 00:22:39.660983 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.661064 kubelet[3171]: I0813 00:22:39.661006 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.661064 kubelet[3171]: I0813 00:22:39.661025 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.661064 kubelet[3171]: I0813 00:22:39.661071 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.663445 kubelet[3171]: I0813 00:22:39.661092 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/980a08b3768e18552279b17c1c7c2087-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-2fbd311b45\" (UID: \"980a08b3768e18552279b17c1c7c2087\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.663445 kubelet[3171]: I0813 00:22:39.661110 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e36d868227303457fc9ad96657124cb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-2fbd311b45\" (UID: \"5e36d868227303457fc9ad96657124cb\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.663445 kubelet[3171]: I0813 00:22:39.662270 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.663445 kubelet[3171]: I0813 
00:22:39.662319 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d00f81d17d324ea398955c0d0ccebb8a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" (UID: \"d00f81d17d324ea398955c0d0ccebb8a\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.669567 kubelet[3171]: I0813 00:22:39.669534 3171 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:39.669721 kubelet[3171]: I0813 00:22:39.669624 3171 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:40.431902 kubelet[3171]: I0813 00:22:40.430930 3171 apiserver.go:52] "Watching apiserver" Aug 13 00:22:40.460748 kubelet[3171]: I0813 00:22:40.460677 3171 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:22:40.521005 kubelet[3171]: I0813 00:22:40.520579 3171 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:40.533524 kubelet[3171]: W0813 00:22:40.533483 3171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:40.533675 kubelet[3171]: E0813 00:22:40.533556 3171 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-2fbd311b45\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" Aug 13 00:22:40.548235 kubelet[3171]: I0813 00:22:40.548034 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2fbd311b45" podStartSLOduration=1.547999242 podStartE2EDuration="1.547999242s" podCreationTimestamp="2025-08-13 00:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-08-13 00:22:40.547983042 +0000 UTC m=+1.233111802" watchObservedRunningTime="2025-08-13 00:22:40.547999242 +0000 UTC m=+1.233128002" Aug 13 00:22:40.579119 kubelet[3171]: I0813 00:22:40.579024 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2fbd311b45" podStartSLOduration=3.579005091 podStartE2EDuration="3.579005091s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:40.56461249 +0000 UTC m=+1.249741250" watchObservedRunningTime="2025-08-13 00:22:40.579005091 +0000 UTC m=+1.264133891" Aug 13 00:22:40.602435 kubelet[3171]: I0813 00:22:40.602349 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2fbd311b45" podStartSLOduration=3.602327959 podStartE2EDuration="3.602327959s" podCreationTimestamp="2025-08-13 00:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:40.579844054 +0000 UTC m=+1.264972814" watchObservedRunningTime="2025-08-13 00:22:40.602327959 +0000 UTC m=+1.287456719" Aug 13 00:22:43.610623 kubelet[3171]: I0813 00:22:43.610588 3171 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:22:43.611403 containerd[1729]: time="2025-08-13T00:22:43.611020789Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:22:43.612042 kubelet[3171]: I0813 00:22:43.611703 3171 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:22:44.419874 systemd[1]: Created slice kubepods-besteffort-poda9288c90_ad2a_4471_ad26_82c14c944b90.slice - libcontainer container kubepods-besteffort-poda9288c90_ad2a_4471_ad26_82c14c944b90.slice. Aug 13 00:22:44.498574 kubelet[3171]: I0813 00:22:44.498450 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9288c90-ad2a-4471-ad26-82c14c944b90-kube-proxy\") pod \"kube-proxy-qv5cp\" (UID: \"a9288c90-ad2a-4471-ad26-82c14c944b90\") " pod="kube-system/kube-proxy-qv5cp" Aug 13 00:22:44.498574 kubelet[3171]: I0813 00:22:44.498494 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9288c90-ad2a-4471-ad26-82c14c944b90-xtables-lock\") pod \"kube-proxy-qv5cp\" (UID: \"a9288c90-ad2a-4471-ad26-82c14c944b90\") " pod="kube-system/kube-proxy-qv5cp" Aug 13 00:22:44.498574 kubelet[3171]: I0813 00:22:44.498516 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdd8t\" (UniqueName: \"kubernetes.io/projected/a9288c90-ad2a-4471-ad26-82c14c944b90-kube-api-access-zdd8t\") pod \"kube-proxy-qv5cp\" (UID: \"a9288c90-ad2a-4471-ad26-82c14c944b90\") " pod="kube-system/kube-proxy-qv5cp" Aug 13 00:22:44.498574 kubelet[3171]: I0813 00:22:44.498542 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9288c90-ad2a-4471-ad26-82c14c944b90-lib-modules\") pod \"kube-proxy-qv5cp\" (UID: \"a9288c90-ad2a-4471-ad26-82c14c944b90\") " pod="kube-system/kube-proxy-qv5cp" Aug 13 00:22:44.731383 containerd[1729]: time="2025-08-13T00:22:44.730084494Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-qv5cp,Uid:a9288c90-ad2a-4471-ad26-82c14c944b90,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:44.787479 containerd[1729]: time="2025-08-13T00:22:44.787385339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:44.788895 containerd[1729]: time="2025-08-13T00:22:44.787487259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:44.788895 containerd[1729]: time="2025-08-13T00:22:44.787513739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:44.788895 containerd[1729]: time="2025-08-13T00:22:44.787722300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:44.821352 systemd[1]: Started cri-containerd-833317b0c893e9f92a238d7a07841c2048a3d0fd0b5b3b38c412ad7be924e943.scope - libcontainer container 833317b0c893e9f92a238d7a07841c2048a3d0fd0b5b3b38c412ad7be924e943. Aug 13 00:22:44.846816 systemd[1]: Created slice kubepods-besteffort-pod513552e5_d3ca_4854_a748_e3cce874497b.slice - libcontainer container kubepods-besteffort-pod513552e5_d3ca_4854_a748_e3cce874497b.slice. 
Aug 13 00:22:44.871246 containerd[1729]: time="2025-08-13T00:22:44.870954620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qv5cp,Uid:a9288c90-ad2a-4471-ad26-82c14c944b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"833317b0c893e9f92a238d7a07841c2048a3d0fd0b5b3b38c412ad7be924e943\"" Aug 13 00:22:44.877480 containerd[1729]: time="2025-08-13T00:22:44.876945397Z" level=info msg="CreateContainer within sandbox \"833317b0c893e9f92a238d7a07841c2048a3d0fd0b5b3b38c412ad7be924e943\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:22:44.902263 kubelet[3171]: I0813 00:22:44.902191 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/513552e5-d3ca-4854-a748-e3cce874497b-var-lib-calico\") pod \"tigera-operator-747864d56d-nmf9p\" (UID: \"513552e5-d3ca-4854-a748-e3cce874497b\") " pod="tigera-operator/tigera-operator-747864d56d-nmf9p" Aug 13 00:22:44.903361 kubelet[3171]: I0813 00:22:44.902343 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbgc\" (UniqueName: \"kubernetes.io/projected/513552e5-d3ca-4854-a748-e3cce874497b-kube-api-access-rmbgc\") pod \"tigera-operator-747864d56d-nmf9p\" (UID: \"513552e5-d3ca-4854-a748-e3cce874497b\") " pod="tigera-operator/tigera-operator-747864d56d-nmf9p" Aug 13 00:22:44.926742 containerd[1729]: time="2025-08-13T00:22:44.926606740Z" level=info msg="CreateContainer within sandbox \"833317b0c893e9f92a238d7a07841c2048a3d0fd0b5b3b38c412ad7be924e943\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"723fa8b0541a6416ff137637ae208e6332b4f47eada4ffdcbcb7b97e32d1028e\"" Aug 13 00:22:44.928955 containerd[1729]: time="2025-08-13T00:22:44.928107544Z" level=info msg="StartContainer for \"723fa8b0541a6416ff137637ae208e6332b4f47eada4ffdcbcb7b97e32d1028e\"" Aug 13 00:22:44.960354 systemd[1]: Started 
cri-containerd-723fa8b0541a6416ff137637ae208e6332b4f47eada4ffdcbcb7b97e32d1028e.scope - libcontainer container 723fa8b0541a6416ff137637ae208e6332b4f47eada4ffdcbcb7b97e32d1028e. Aug 13 00:22:45.003222 containerd[1729]: time="2025-08-13T00:22:45.000806514Z" level=info msg="StartContainer for \"723fa8b0541a6416ff137637ae208e6332b4f47eada4ffdcbcb7b97e32d1028e\" returns successfully" Aug 13 00:22:45.153269 containerd[1729]: time="2025-08-13T00:22:45.153199313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nmf9p,Uid:513552e5-d3ca-4854-a748-e3cce874497b,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:22:45.192842 containerd[1729]: time="2025-08-13T00:22:45.192643787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:45.193058 containerd[1729]: time="2025-08-13T00:22:45.192739747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:45.193558 containerd[1729]: time="2025-08-13T00:22:45.193466269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:45.193871 containerd[1729]: time="2025-08-13T00:22:45.193778550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:45.209356 systemd[1]: Started cri-containerd-b63876ffb8024e5e2b76856e2c6e50d9cb971e2b866d6bfdf3decea428cebf27.scope - libcontainer container b63876ffb8024e5e2b76856e2c6e50d9cb971e2b866d6bfdf3decea428cebf27. 
Aug 13 00:22:45.242015 containerd[1729]: time="2025-08-13T00:22:45.241533048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nmf9p,Uid:513552e5-d3ca-4854-a748-e3cce874497b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b63876ffb8024e5e2b76856e2c6e50d9cb971e2b866d6bfdf3decea428cebf27\"" Aug 13 00:22:45.244535 containerd[1729]: time="2025-08-13T00:22:45.244489016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:22:45.546927 kubelet[3171]: I0813 00:22:45.546233 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qv5cp" podStartSLOduration=1.546214566 podStartE2EDuration="1.546214566s" podCreationTimestamp="2025-08-13 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:45.545990125 +0000 UTC m=+6.231118885" watchObservedRunningTime="2025-08-13 00:22:45.546214566 +0000 UTC m=+6.231343326" Aug 13 00:22:45.616527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347907670.mount: Deactivated successfully. Aug 13 00:22:47.165152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290348208.mount: Deactivated successfully. 
Aug 13 00:22:47.577593 containerd[1729]: time="2025-08-13T00:22:47.577539299Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:47.580751 containerd[1729]: time="2025-08-13T00:22:47.580694668Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:22:47.583253 containerd[1729]: time="2025-08-13T00:22:47.583197996Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:47.587342 containerd[1729]: time="2025-08-13T00:22:47.587245727Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:47.588309 containerd[1729]: time="2025-08-13T00:22:47.588010329Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.343474953s" Aug 13 00:22:47.588309 containerd[1729]: time="2025-08-13T00:22:47.588045370Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:22:47.592059 containerd[1729]: time="2025-08-13T00:22:47.592018661Z" level=info msg="CreateContainer within sandbox \"b63876ffb8024e5e2b76856e2c6e50d9cb971e2b866d6bfdf3decea428cebf27\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:22:47.618259 containerd[1729]: time="2025-08-13T00:22:47.618211137Z" level=info msg="CreateContainer within sandbox 
\"b63876ffb8024e5e2b76856e2c6e50d9cb971e2b866d6bfdf3decea428cebf27\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"68e3d50eebfb029c3c7981d6176b1f9b969c015de49f560e18c55a6c574855bb\"" Aug 13 00:22:47.620036 containerd[1729]: time="2025-08-13T00:22:47.619372180Z" level=info msg="StartContainer for \"68e3d50eebfb029c3c7981d6176b1f9b969c015de49f560e18c55a6c574855bb\"" Aug 13 00:22:47.647363 systemd[1]: Started cri-containerd-68e3d50eebfb029c3c7981d6176b1f9b969c015de49f560e18c55a6c574855bb.scope - libcontainer container 68e3d50eebfb029c3c7981d6176b1f9b969c015de49f560e18c55a6c574855bb. Aug 13 00:22:47.676675 containerd[1729]: time="2025-08-13T00:22:47.676539745Z" level=info msg="StartContainer for \"68e3d50eebfb029c3c7981d6176b1f9b969c015de49f560e18c55a6c574855bb\" returns successfully" Aug 13 00:22:52.520167 kubelet[3171]: I0813 00:22:52.519773 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-nmf9p" podStartSLOduration=6.17384163 podStartE2EDuration="8.51975107s" podCreationTimestamp="2025-08-13 00:22:44 +0000 UTC" firstStartedPulling="2025-08-13 00:22:45.243360053 +0000 UTC m=+5.928488813" lastFinishedPulling="2025-08-13 00:22:47.589269493 +0000 UTC m=+8.274398253" observedRunningTime="2025-08-13 00:22:48.551776047 +0000 UTC m=+9.236904847" watchObservedRunningTime="2025-08-13 00:22:52.51975107 +0000 UTC m=+13.204879870" Aug 13 00:22:53.674763 sudo[2248]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:53.769818 sshd[2245]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:53.773472 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:22:53.776857 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:50938.service: Deactivated successfully. Aug 13 00:22:53.785842 systemd[1]: session-9.scope: Deactivated successfully. 
Aug 13 00:22:53.786272 systemd[1]: session-9.scope: Consumed 6.304s CPU time, 153.2M memory peak, 0B memory swap peak. Aug 13 00:22:53.787927 systemd-logind[1697]: Removed session 9. Aug 13 00:23:00.508217 systemd[1]: Created slice kubepods-besteffort-podeca1b8a2_6094_4b7b_a69a_da8add1fb225.slice - libcontainer container kubepods-besteffort-podeca1b8a2_6094_4b7b_a69a_da8add1fb225.slice. Aug 13 00:23:00.604935 kubelet[3171]: I0813 00:23:00.604879 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eca1b8a2-6094-4b7b-a69a-da8add1fb225-tigera-ca-bundle\") pod \"calico-typha-656b9fc846-9gjlt\" (UID: \"eca1b8a2-6094-4b7b-a69a-da8add1fb225\") " pod="calico-system/calico-typha-656b9fc846-9gjlt" Aug 13 00:23:00.604935 kubelet[3171]: I0813 00:23:00.604929 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ns4\" (UniqueName: \"kubernetes.io/projected/eca1b8a2-6094-4b7b-a69a-da8add1fb225-kube-api-access-c4ns4\") pod \"calico-typha-656b9fc846-9gjlt\" (UID: \"eca1b8a2-6094-4b7b-a69a-da8add1fb225\") " pod="calico-system/calico-typha-656b9fc846-9gjlt" Aug 13 00:23:00.606445 kubelet[3171]: I0813 00:23:00.604963 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eca1b8a2-6094-4b7b-a69a-da8add1fb225-typha-certs\") pod \"calico-typha-656b9fc846-9gjlt\" (UID: \"eca1b8a2-6094-4b7b-a69a-da8add1fb225\") " pod="calico-system/calico-typha-656b9fc846-9gjlt" Aug 13 00:23:00.683408 systemd[1]: Created slice kubepods-besteffort-pod8da73be8_d8ba_4bc8_b127_7d1b9f75b5fc.slice - libcontainer container kubepods-besteffort-pod8da73be8_d8ba_4bc8_b127_7d1b9f75b5fc.slice. 
Aug 13 00:23:00.706271 kubelet[3171]: I0813 00:23:00.705730 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-cni-log-dir\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706271 kubelet[3171]: I0813 00:23:00.705785 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-node-certs\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706271 kubelet[3171]: I0813 00:23:00.705802 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-xtables-lock\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706271 kubelet[3171]: I0813 00:23:00.705875 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-var-lib-calico\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706271 kubelet[3171]: I0813 00:23:00.705931 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-cni-bin-dir\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706509 kubelet[3171]: I0813 00:23:00.705948 3171 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-policysync\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706509 kubelet[3171]: I0813 00:23:00.705966 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-tigera-ca-bundle\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706509 kubelet[3171]: I0813 00:23:00.705983 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-var-run-calico\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706509 kubelet[3171]: I0813 00:23:00.705998 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-flexvol-driver-host\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706509 kubelet[3171]: I0813 00:23:00.706016 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwfln\" (UniqueName: \"kubernetes.io/projected/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-kube-api-access-qwfln\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706619 kubelet[3171]: I0813 00:23:00.706059 3171 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-cni-net-dir\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.706619 kubelet[3171]: I0813 00:23:00.706077 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc-lib-modules\") pod \"calico-node-gntcm\" (UID: \"8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc\") " pod="calico-system/calico-node-gntcm" Aug 13 00:23:00.808493 kubelet[3171]: E0813 00:23:00.807916 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.809245 kubelet[3171]: W0813 00:23:00.808821 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.809245 kubelet[3171]: E0813 00:23:00.808858 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.810051 kubelet[3171]: E0813 00:23:00.809857 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.810051 kubelet[3171]: W0813 00:23:00.809907 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.810051 kubelet[3171]: E0813 00:23:00.809929 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.810957 kubelet[3171]: E0813 00:23:00.810790 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.810957 kubelet[3171]: W0813 00:23:00.810808 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.810957 kubelet[3171]: E0813 00:23:00.810823 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.811526 kubelet[3171]: E0813 00:23:00.811309 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.811526 kubelet[3171]: W0813 00:23:00.811324 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.811526 kubelet[3171]: E0813 00:23:00.811387 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.811934 kubelet[3171]: E0813 00:23:00.811659 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.811934 kubelet[3171]: W0813 00:23:00.811681 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.811934 kubelet[3171]: E0813 00:23:00.811697 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.813408 containerd[1729]: time="2025-08-13T00:23:00.812428955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656b9fc846-9gjlt,Uid:eca1b8a2-6094-4b7b-a69a-da8add1fb225,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:00.820421 kubelet[3171]: E0813 00:23:00.818935 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.820421 kubelet[3171]: W0813 00:23:00.819105 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.820421 kubelet[3171]: E0813 00:23:00.820351 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.837814 kubelet[3171]: E0813 00:23:00.835706 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.837814 kubelet[3171]: W0813 00:23:00.836002 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.837814 kubelet[3171]: E0813 00:23:00.836027 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.868988 containerd[1729]: time="2025-08-13T00:23:00.868489431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:00.868988 containerd[1729]: time="2025-08-13T00:23:00.868759272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:00.868988 containerd[1729]: time="2025-08-13T00:23:00.868799152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:00.869935 containerd[1729]: time="2025-08-13T00:23:00.869450754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:00.898261 systemd[1]: Started cri-containerd-0b334d08908fdb3c7888efcf2f8c5ee0af8697a9601a9875f93f725b9866a459.scope - libcontainer container 0b334d08908fdb3c7888efcf2f8c5ee0af8697a9601a9875f93f725b9866a459. 
Aug 13 00:23:00.909859 kubelet[3171]: E0813 00:23:00.909810 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749" Aug 13 00:23:00.985947 containerd[1729]: time="2025-08-13T00:23:00.985278717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656b9fc846-9gjlt,Uid:eca1b8a2-6094-4b7b-a69a-da8add1fb225,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b334d08908fdb3c7888efcf2f8c5ee0af8697a9601a9875f93f725b9866a459\"" Aug 13 00:23:00.988148 containerd[1729]: time="2025-08-13T00:23:00.987429763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gntcm,Uid:8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:00.989951 kubelet[3171]: E0813 00:23:00.989902 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.989951 kubelet[3171]: W0813 00:23:00.989942 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.990095 kubelet[3171]: E0813 00:23:00.989967 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.990421 kubelet[3171]: E0813 00:23:00.990368 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.991226 kubelet[3171]: W0813 00:23:00.990422 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.990484 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.990733 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.991226 kubelet[3171]: W0813 00:23:00.990744 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.990754 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.990979 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.991226 kubelet[3171]: W0813 00:23:00.990989 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.991021 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.991226 kubelet[3171]: E0813 00:23:00.991277 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.991226 kubelet[3171]: W0813 00:23:00.991289 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.991706 kubelet[3171]: E0813 00:23:00.991299 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.992123 containerd[1729]: time="2025-08-13T00:23:00.992078296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:23:00.992517 kubelet[3171]: E0813 00:23:00.992322 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.992517 kubelet[3171]: W0813 00:23:00.992340 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.992517 kubelet[3171]: E0813 00:23:00.992459 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.992816 kubelet[3171]: E0813 00:23:00.992715 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.992816 kubelet[3171]: W0813 00:23:00.992725 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.992816 kubelet[3171]: E0813 00:23:00.992736 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.993269 kubelet[3171]: E0813 00:23:00.992925 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.993269 kubelet[3171]: W0813 00:23:00.992943 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.993269 kubelet[3171]: E0813 00:23:00.992969 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.993269 kubelet[3171]: E0813 00:23:00.993184 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.993269 kubelet[3171]: W0813 00:23:00.993194 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.993269 kubelet[3171]: E0813 00:23:00.993214 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.993396 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.994412 kubelet[3171]: W0813 00:23:00.993405 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.993415 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.993801 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.994412 kubelet[3171]: W0813 00:23:00.993812 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.993821 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.994000 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.994412 kubelet[3171]: W0813 00:23:00.994008 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.994044 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.994412 kubelet[3171]: E0813 00:23:00.994304 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.994618 kubelet[3171]: W0813 00:23:00.994316 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.994618 kubelet[3171]: E0813 00:23:00.994326 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.994618 kubelet[3171]: E0813 00:23:00.994527 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.994618 kubelet[3171]: W0813 00:23:00.994537 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.994618 kubelet[3171]: E0813 00:23:00.994546 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.994738 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996422 kubelet[3171]: W0813 00:23:00.994761 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.994772 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.994928 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996422 kubelet[3171]: W0813 00:23:00.994938 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.994947 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.995179 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996422 kubelet[3171]: W0813 00:23:00.995190 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.995201 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.996422 kubelet[3171]: E0813 00:23:00.995400 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996673 kubelet[3171]: W0813 00:23:00.995411 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996673 kubelet[3171]: E0813 00:23:00.995420 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:00.996673 kubelet[3171]: E0813 00:23:00.995606 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996673 kubelet[3171]: W0813 00:23:00.995615 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996673 kubelet[3171]: E0813 00:23:00.995625 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:00.996673 kubelet[3171]: E0813 00:23:00.995799 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:00.996673 kubelet[3171]: W0813 00:23:00.995808 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:00.996673 kubelet[3171]: E0813 00:23:00.995816 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.009016 kubelet[3171]: E0813 00:23:01.008984 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.009484 kubelet[3171]: W0813 00:23:01.009184 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.009484 kubelet[3171]: E0813 00:23:01.009211 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.009484 kubelet[3171]: I0813 00:23:01.009241 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7bc7ffc9-ee10-44a4-88ba-09de883ee749-socket-dir\") pod \"csi-node-driver-wbgr7\" (UID: \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\") " pod="calico-system/csi-node-driver-wbgr7" Aug 13 00:23:01.010333 kubelet[3171]: E0813 00:23:01.010211 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.010333 kubelet[3171]: W0813 00:23:01.010256 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.010333 kubelet[3171]: E0813 00:23:01.010279 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.010333 kubelet[3171]: I0813 00:23:01.010300 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7bc7ffc9-ee10-44a4-88ba-09de883ee749-registration-dir\") pod \"csi-node-driver-wbgr7\" (UID: \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\") " pod="calico-system/csi-node-driver-wbgr7" Aug 13 00:23:01.010587 kubelet[3171]: E0813 00:23:01.010565 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.010587 kubelet[3171]: W0813 00:23:01.010584 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.010668 kubelet[3171]: E0813 00:23:01.010620 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.011935 kubelet[3171]: E0813 00:23:01.011908 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.011935 kubelet[3171]: W0813 00:23:01.011929 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.012072 kubelet[3171]: E0813 00:23:01.011969 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.012206 kubelet[3171]: E0813 00:23:01.012180 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.012206 kubelet[3171]: W0813 00:23:01.012202 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.012334 kubelet[3171]: E0813 00:23:01.012232 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.012334 kubelet[3171]: I0813 00:23:01.012253 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bc7ffc9-ee10-44a4-88ba-09de883ee749-kubelet-dir\") pod \"csi-node-driver-wbgr7\" (UID: \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\") " pod="calico-system/csi-node-driver-wbgr7" Aug 13 00:23:01.012758 kubelet[3171]: E0813 00:23:01.012454 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.012758 kubelet[3171]: W0813 00:23:01.012464 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.012758 kubelet[3171]: E0813 00:23:01.012607 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.012758 kubelet[3171]: W0813 00:23:01.012615 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Aug 13 00:23:01.012758 kubelet[3171]: E0813 00:23:01.012625 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.012758 kubelet[3171]: E0813 00:23:01.012622 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.012758 kubelet[3171]: I0813 00:23:01.012656 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7bc7ffc9-ee10-44a4-88ba-09de883ee749-varrun\") pod \"csi-node-driver-wbgr7\" (UID: \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\") " pod="calico-system/csi-node-driver-wbgr7" Aug 13 00:23:01.012914 kubelet[3171]: E0813 00:23:01.012793 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.012914 kubelet[3171]: W0813 00:23:01.012802 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.012914 kubelet[3171]: E0813 00:23:01.012811 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.013924 kubelet[3171]: E0813 00:23:01.013194 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.013924 kubelet[3171]: W0813 00:23:01.013214 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.013924 kubelet[3171]: E0813 00:23:01.013227 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.013924 kubelet[3171]: E0813 00:23:01.013796 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.013924 kubelet[3171]: W0813 00:23:01.013812 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.013924 kubelet[3171]: E0813 00:23:01.013833 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.015312 kubelet[3171]: E0813 00:23:01.015188 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.015312 kubelet[3171]: W0813 00:23:01.015211 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.015312 kubelet[3171]: E0813 00:23:01.015230 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.016061 kubelet[3171]: E0813 00:23:01.016040 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.017192 kubelet[3171]: W0813 00:23:01.017019 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.017192 kubelet[3171]: E0813 00:23:01.017054 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.017864 kubelet[3171]: E0813 00:23:01.017847 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.018072 kubelet[3171]: W0813 00:23:01.017940 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.018072 kubelet[3171]: E0813 00:23:01.017960 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.018072 kubelet[3171]: I0813 00:23:01.017993 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6xc\" (UniqueName: \"kubernetes.io/projected/7bc7ffc9-ee10-44a4-88ba-09de883ee749-kube-api-access-dw6xc\") pod \"csi-node-driver-wbgr7\" (UID: \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\") " pod="calico-system/csi-node-driver-wbgr7" Aug 13 00:23:01.019164 kubelet[3171]: E0813 00:23:01.019023 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.019164 kubelet[3171]: W0813 00:23:01.019045 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.019164 kubelet[3171]: E0813 00:23:01.019061 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.019427 kubelet[3171]: E0813 00:23:01.019415 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.019494 kubelet[3171]: W0813 00:23:01.019483 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.020235 kubelet[3171]: E0813 00:23:01.020195 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.048547 containerd[1729]: time="2025-08-13T00:23:01.047345450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:01.048547 containerd[1729]: time="2025-08-13T00:23:01.047411491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:01.048547 containerd[1729]: time="2025-08-13T00:23:01.047427691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:01.048547 containerd[1729]: time="2025-08-13T00:23:01.047511331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:01.077369 systemd[1]: Started cri-containerd-b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3.scope - libcontainer container b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3. 
Aug 13 00:23:01.119683 kubelet[3171]: E0813 00:23:01.119330 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.119683 kubelet[3171]: W0813 00:23:01.119353 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.119683 kubelet[3171]: E0813 00:23:01.119373 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.119683 kubelet[3171]: E0813 00:23:01.119589 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.119683 kubelet[3171]: W0813 00:23:01.119600 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.119683 kubelet[3171]: E0813 00:23:01.119616 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.120192 kubelet[3171]: E0813 00:23:01.120081 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.120192 kubelet[3171]: W0813 00:23:01.120093 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.120192 kubelet[3171]: E0813 00:23:01.120115 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.120446 kubelet[3171]: E0813 00:23:01.120423 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.120446 kubelet[3171]: W0813 00:23:01.120439 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.120604 kubelet[3171]: E0813 00:23:01.120459 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.120689 kubelet[3171]: E0813 00:23:01.120672 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.120689 kubelet[3171]: W0813 00:23:01.120686 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.120842 kubelet[3171]: E0813 00:23:01.120704 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.122231 kubelet[3171]: E0813 00:23:01.122187 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.122231 kubelet[3171]: W0813 00:23:01.122225 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.122442 kubelet[3171]: E0813 00:23:01.122308 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.123123 kubelet[3171]: E0813 00:23:01.123083 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.123123 kubelet[3171]: W0813 00:23:01.123116 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.123341 kubelet[3171]: E0813 00:23:01.123222 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.123561 kubelet[3171]: E0813 00:23:01.123537 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.123561 kubelet[3171]: W0813 00:23:01.123556 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.123773 kubelet[3171]: E0813 00:23:01.123627 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.123773 kubelet[3171]: E0813 00:23:01.123761 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.123773 kubelet[3171]: W0813 00:23:01.123771 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.123902 kubelet[3171]: E0813 00:23:01.123878 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.124011 kubelet[3171]: E0813 00:23:01.123991 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.124199 kubelet[3171]: W0813 00:23:01.124007 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.124608 kubelet[3171]: E0813 00:23:01.124248 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.124740 kubelet[3171]: E0813 00:23:01.124705 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.124740 kubelet[3171]: W0813 00:23:01.124726 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.124947 kubelet[3171]: E0813 00:23:01.124748 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.125311 kubelet[3171]: E0813 00:23:01.125285 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.125311 kubelet[3171]: W0813 00:23:01.125305 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.125593 kubelet[3171]: E0813 00:23:01.125381 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.126345 kubelet[3171]: E0813 00:23:01.126319 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.126345 kubelet[3171]: W0813 00:23:01.126339 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.127275 kubelet[3171]: E0813 00:23:01.126449 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.127532 kubelet[3171]: E0813 00:23:01.127503 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.127532 kubelet[3171]: W0813 00:23:01.127526 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.127696 kubelet[3171]: E0813 00:23:01.127610 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.127748 kubelet[3171]: E0813 00:23:01.127726 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.127748 kubelet[3171]: W0813 00:23:01.127741 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.127935 kubelet[3171]: E0813 00:23:01.127826 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.128232 kubelet[3171]: E0813 00:23:01.128207 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.128232 kubelet[3171]: W0813 00:23:01.128226 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.128521 kubelet[3171]: E0813 00:23:01.128301 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.129399 kubelet[3171]: E0813 00:23:01.129357 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.129399 kubelet[3171]: W0813 00:23:01.129389 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.131240 kubelet[3171]: E0813 00:23:01.129466 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.131512 kubelet[3171]: E0813 00:23:01.131482 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.131512 kubelet[3171]: W0813 00:23:01.131508 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.131657 kubelet[3171]: E0813 00:23:01.131596 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.131749 kubelet[3171]: E0813 00:23:01.131732 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.131749 kubelet[3171]: W0813 00:23:01.131745 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.131886 kubelet[3171]: E0813 00:23:01.131801 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.132060 kubelet[3171]: E0813 00:23:01.132039 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.132060 kubelet[3171]: W0813 00:23:01.132056 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.132220 kubelet[3171]: E0813 00:23:01.132116 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.132295 kubelet[3171]: E0813 00:23:01.132267 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.132295 kubelet[3171]: W0813 00:23:01.132284 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.132359 kubelet[3171]: E0813 00:23:01.132305 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.133250 kubelet[3171]: E0813 00:23:01.133223 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.133250 kubelet[3171]: W0813 00:23:01.133246 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.133343 kubelet[3171]: E0813 00:23:01.133270 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.133566 kubelet[3171]: E0813 00:23:01.133546 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.133566 kubelet[3171]: W0813 00:23:01.133562 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.133652 kubelet[3171]: E0813 00:23:01.133581 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.133934 kubelet[3171]: E0813 00:23:01.133910 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.133934 kubelet[3171]: W0813 00:23:01.133930 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.134005 kubelet[3171]: E0813 00:23:01.133944 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.134968 kubelet[3171]: E0813 00:23:01.134945 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.135011 kubelet[3171]: W0813 00:23:01.134967 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.135011 kubelet[3171]: E0813 00:23:01.134983 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:01.152100 kubelet[3171]: E0813 00:23:01.152070 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:01.152348 kubelet[3171]: W0813 00:23:01.152255 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:01.152348 kubelet[3171]: E0813 00:23:01.152298 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:01.197772 containerd[1729]: time="2025-08-13T00:23:01.197722110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gntcm,Uid:8da73be8-d8ba-4bc8-b127-7d1b9f75b5fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\"" Aug 13 00:23:02.134933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822169609.mount: Deactivated successfully. 
Aug 13 00:23:02.471173 kubelet[3171]: E0813 00:23:02.471097 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749" Aug 13 00:23:02.826857 containerd[1729]: time="2025-08-13T00:23:02.826722298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:02.829418 containerd[1729]: time="2025-08-13T00:23:02.829319545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:23:02.832160 containerd[1729]: time="2025-08-13T00:23:02.832104913Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:02.837328 containerd[1729]: time="2025-08-13T00:23:02.836509405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:02.837328 containerd[1729]: time="2025-08-13T00:23:02.837185287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.844961751s" Aug 13 00:23:02.837328 containerd[1729]: time="2025-08-13T00:23:02.837217247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:23:02.839937 containerd[1729]: time="2025-08-13T00:23:02.839178453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:23:02.852711 containerd[1729]: time="2025-08-13T00:23:02.852474450Z" level=info msg="CreateContainer within sandbox \"0b334d08908fdb3c7888efcf2f8c5ee0af8697a9601a9875f93f725b9866a459\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:23:02.882351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606435605.mount: Deactivated successfully. Aug 13 00:23:02.890706 containerd[1729]: time="2025-08-13T00:23:02.890581956Z" level=info msg="CreateContainer within sandbox \"0b334d08908fdb3c7888efcf2f8c5ee0af8697a9601a9875f93f725b9866a459\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d1240c410fa0fa9e0eb01964e17a6c99db46173430494125970a077c79650bad\"" Aug 13 00:23:02.892383 containerd[1729]: time="2025-08-13T00:23:02.891195158Z" level=info msg="StartContainer for \"d1240c410fa0fa9e0eb01964e17a6c99db46173430494125970a077c79650bad\"" Aug 13 00:23:02.919342 systemd[1]: Started cri-containerd-d1240c410fa0fa9e0eb01964e17a6c99db46173430494125970a077c79650bad.scope - libcontainer container d1240c410fa0fa9e0eb01964e17a6c99db46173430494125970a077c79650bad. 
Aug 13 00:23:02.962110 containerd[1729]: time="2025-08-13T00:23:02.962055316Z" level=info msg="StartContainer for \"d1240c410fa0fa9e0eb01964e17a6c99db46173430494125970a077c79650bad\" returns successfully" Aug 13 00:23:03.594246 kubelet[3171]: I0813 00:23:03.594173 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-656b9fc846-9gjlt" podStartSLOduration=1.7470213239999999 podStartE2EDuration="3.593902559s" podCreationTimestamp="2025-08-13 00:23:00 +0000 UTC" firstStartedPulling="2025-08-13 00:23:00.991547855 +0000 UTC m=+21.676676615" lastFinishedPulling="2025-08-13 00:23:02.83842909 +0000 UTC m=+23.523557850" observedRunningTime="2025-08-13 00:23:03.591817954 +0000 UTC m=+24.276946714" watchObservedRunningTime="2025-08-13 00:23:03.593902559 +0000 UTC m=+24.279031319" Aug 13 00:23:03.614146 kubelet[3171]: E0813 00:23:03.614100 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.614146 kubelet[3171]: W0813 00:23:03.614141 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.614304 kubelet[3171]: E0813 00:23:03.614166 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.614390 kubelet[3171]: E0813 00:23:03.614362 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.614434 kubelet[3171]: W0813 00:23:03.614417 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.614461 kubelet[3171]: E0813 00:23:03.614435 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.614649 kubelet[3171]: E0813 00:23:03.614631 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.614690 kubelet[3171]: W0813 00:23:03.614645 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.614690 kubelet[3171]: E0813 00:23:03.614663 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.614853 kubelet[3171]: E0813 00:23:03.614836 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.614853 kubelet[3171]: W0813 00:23:03.614850 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.614903 kubelet[3171]: E0813 00:23:03.614861 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.615085 kubelet[3171]: E0813 00:23:03.615066 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.615085 kubelet[3171]: W0813 00:23:03.615081 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.615176 kubelet[3171]: E0813 00:23:03.615090 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.615301 kubelet[3171]: E0813 00:23:03.615284 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.615301 kubelet[3171]: W0813 00:23:03.615298 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.615360 kubelet[3171]: E0813 00:23:03.615307 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.615511 kubelet[3171]: E0813 00:23:03.615494 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.615511 kubelet[3171]: W0813 00:23:03.615507 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.615567 kubelet[3171]: E0813 00:23:03.615517 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.615701 kubelet[3171]: E0813 00:23:03.615683 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.615701 kubelet[3171]: W0813 00:23:03.615698 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.615744 kubelet[3171]: E0813 00:23:03.615707 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.615938 kubelet[3171]: E0813 00:23:03.615920 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.615938 kubelet[3171]: W0813 00:23:03.615934 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.616001 kubelet[3171]: E0813 00:23:03.615943 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.616149 kubelet[3171]: E0813 00:23:03.616115 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.616191 kubelet[3171]: W0813 00:23:03.616129 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.616191 kubelet[3171]: E0813 00:23:03.616165 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.616400 kubelet[3171]: E0813 00:23:03.616360 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.616428 kubelet[3171]: W0813 00:23:03.616400 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.616428 kubelet[3171]: E0813 00:23:03.616410 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.616637 kubelet[3171]: E0813 00:23:03.616618 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.616637 kubelet[3171]: W0813 00:23:03.616632 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.616696 kubelet[3171]: E0813 00:23:03.616641 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.616832 kubelet[3171]: E0813 00:23:03.616812 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.616832 kubelet[3171]: W0813 00:23:03.616825 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.616887 kubelet[3171]: E0813 00:23:03.616834 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.617009 kubelet[3171]: E0813 00:23:03.616991 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.617009 kubelet[3171]: W0813 00:23:03.617005 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.617067 kubelet[3171]: E0813 00:23:03.617013 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.617211 kubelet[3171]: E0813 00:23:03.617193 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.617211 kubelet[3171]: W0813 00:23:03.617206 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.617293 kubelet[3171]: E0813 00:23:03.617215 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.642129 kubelet[3171]: E0813 00:23:03.642089 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.642129 kubelet[3171]: W0813 00:23:03.642145 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.642129 kubelet[3171]: E0813 00:23:03.642169 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.642129 kubelet[3171]: E0813 00:23:03.642378 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.642624 kubelet[3171]: W0813 00:23:03.642387 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.642624 kubelet[3171]: E0813 00:23:03.642397 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.643907 kubelet[3171]: E0813 00:23:03.643081 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.643907 kubelet[3171]: W0813 00:23:03.643910 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.644061 kubelet[3171]: E0813 00:23:03.643947 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.644288 kubelet[3171]: E0813 00:23:03.644260 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.644288 kubelet[3171]: W0813 00:23:03.644275 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.644446 kubelet[3171]: E0813 00:23:03.644305 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.644939 kubelet[3171]: E0813 00:23:03.644500 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.644939 kubelet[3171]: W0813 00:23:03.644509 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.644939 kubelet[3171]: E0813 00:23:03.644567 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.644939 kubelet[3171]: E0813 00:23:03.644792 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.644939 kubelet[3171]: W0813 00:23:03.644802 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.644939 kubelet[3171]: E0813 00:23:03.644818 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.645177 kubelet[3171]: E0813 00:23:03.645023 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.645177 kubelet[3171]: W0813 00:23:03.645032 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.645177 kubelet[3171]: E0813 00:23:03.645043 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.645681 kubelet[3171]: E0813 00:23:03.645654 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.645681 kubelet[3171]: W0813 00:23:03.645676 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.645782 kubelet[3171]: E0813 00:23:03.645703 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.646061 kubelet[3171]: E0813 00:23:03.646036 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.646117 kubelet[3171]: W0813 00:23:03.646071 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.646117 kubelet[3171]: E0813 00:23:03.646091 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.647705 kubelet[3171]: E0813 00:23:03.646365 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.647705 kubelet[3171]: W0813 00:23:03.646375 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.647705 kubelet[3171]: E0813 00:23:03.646392 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.647705 kubelet[3171]: E0813 00:23:03.646698 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.647705 kubelet[3171]: W0813 00:23:03.646719 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.647705 kubelet[3171]: E0813 00:23:03.646759 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.648377 kubelet[3171]: E0813 00:23:03.647877 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.648377 kubelet[3171]: W0813 00:23:03.647904 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.648377 kubelet[3171]: E0813 00:23:03.647918 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.649018 kubelet[3171]: E0813 00:23:03.648791 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.649018 kubelet[3171]: W0813 00:23:03.648808 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.649018 kubelet[3171]: E0813 00:23:03.648821 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.650341 kubelet[3171]: E0813 00:23:03.650209 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.650341 kubelet[3171]: W0813 00:23:03.650231 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.650341 kubelet[3171]: E0813 00:23:03.650247 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.651195 kubelet[3171]: E0813 00:23:03.650631 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.651195 kubelet[3171]: W0813 00:23:03.650649 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.651195 kubelet[3171]: E0813 00:23:03.650671 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.651195 kubelet[3171]: E0813 00:23:03.650867 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.651195 kubelet[3171]: W0813 00:23:03.650883 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.651195 kubelet[3171]: E0813 00:23:03.650894 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.651418 kubelet[3171]: E0813 00:23:03.651402 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.651442 kubelet[3171]: W0813 00:23:03.651415 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.651442 kubelet[3171]: E0813 00:23:03.651438 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:23:03.651829 kubelet[3171]: E0813 00:23:03.651809 3171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:23:03.651923 kubelet[3171]: W0813 00:23:03.651905 3171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:23:03.651980 kubelet[3171]: E0813 00:23:03.651969 3171 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:23:03.915868 containerd[1729]: time="2025-08-13T00:23:03.915721970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.919100 containerd[1729]: time="2025-08-13T00:23:03.919042219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:23:03.922162 containerd[1729]: time="2025-08-13T00:23:03.922085747Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.927219 containerd[1729]: time="2025-08-13T00:23:03.926783720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:03.928472 containerd[1729]: time="2025-08-13T00:23:03.928352844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.089119231s" Aug 13 00:23:03.928604 containerd[1729]: time="2025-08-13T00:23:03.928482325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:23:03.932727 containerd[1729]: time="2025-08-13T00:23:03.932676776Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:23:03.965525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442003205.mount: Deactivated successfully. Aug 13 00:23:03.975263 containerd[1729]: time="2025-08-13T00:23:03.975212893Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d\"" Aug 13 00:23:03.976283 containerd[1729]: time="2025-08-13T00:23:03.976235016Z" level=info msg="StartContainer for \"328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d\"" Aug 13 00:23:04.012337 systemd[1]: Started cri-containerd-328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d.scope - libcontainer container 328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d. Aug 13 00:23:04.047835 containerd[1729]: time="2025-08-13T00:23:04.047768372Z" level=info msg="StartContainer for \"328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d\" returns successfully" Aug 13 00:23:04.055815 systemd[1]: cri-containerd-328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d.scope: Deactivated successfully. Aug 13 00:23:04.471070 kubelet[3171]: E0813 00:23:04.470799 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749" Aug 13 00:23:04.582406 kubelet[3171]: I0813 00:23:04.582360 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:23:04.843478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d-rootfs.mount: Deactivated successfully. 
Aug 13 00:23:05.144215 containerd[1729]: time="2025-08-13T00:23:05.144042622Z" level=info msg="shim disconnected" id=328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d namespace=k8s.io
Aug 13 00:23:05.144215 containerd[1729]: time="2025-08-13T00:23:05.144100462Z" level=warning msg="cleaning up after shim disconnected" id=328242e8745067f782496556fcf8fed12fe63cf0c6b70f9eab630477121c147d namespace=k8s.io
Aug 13 00:23:05.144215 containerd[1729]: time="2025-08-13T00:23:05.144108782Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:23:05.588474 containerd[1729]: time="2025-08-13T00:23:05.588421922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 00:23:06.470562 kubelet[3171]: E0813 00:23:06.470489 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749"
Aug 13 00:23:07.895351 containerd[1729]: time="2025-08-13T00:23:07.895294256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:07.897198 containerd[1729]: time="2025-08-13T00:23:07.897156101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Aug 13 00:23:07.899723 containerd[1729]: time="2025-08-13T00:23:07.899669508Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:07.905226 containerd[1729]: time="2025-08-13T00:23:07.904615161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.316148959s"
Aug 13 00:23:07.905226 containerd[1729]: time="2025-08-13T00:23:07.904654361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Aug 13 00:23:07.905422 containerd[1729]: time="2025-08-13T00:23:07.905396883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:07.910584 containerd[1729]: time="2025-08-13T00:23:07.910537938Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 00:23:07.950522 containerd[1729]: time="2025-08-13T00:23:07.950476927Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d\""
Aug 13 00:23:07.951638 containerd[1729]: time="2025-08-13T00:23:07.951597930Z" level=info msg="StartContainer for \"30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d\""
Aug 13 00:23:07.979429 systemd[1]: Started cri-containerd-30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d.scope - libcontainer container 30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d.
Aug 13 00:23:08.011483 containerd[1729]: time="2025-08-13T00:23:08.011435495Z" level=info msg="StartContainer for \"30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d\" returns successfully"
Aug 13 00:23:08.471937 kubelet[3171]: E0813 00:23:08.470852 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749"
Aug 13 00:23:09.214588 containerd[1729]: time="2025-08-13T00:23:09.214379277Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:23:09.216956 systemd[1]: cri-containerd-30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d.scope: Deactivated successfully.
Aug 13 00:23:09.238764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d-rootfs.mount: Deactivated successfully.
Aug 13 00:23:09.255171 kubelet[3171]: I0813 00:23:09.254849 3171 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:23:09.688446 kubelet[3171]: W0813 00:23:09.321857 3171 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-a-2fbd311b45" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-a-2fbd311b45' and this object
Aug 13 00:23:09.688446 kubelet[3171]: W0813 00:23:09.321892 3171 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.5-a-2fbd311b45" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-a-2fbd311b45' and this object
Aug 13 00:23:09.688446 kubelet[3171]: E0813 00:23:09.321914 3171 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.5-a-2fbd311b45\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-a-2fbd311b45' and this object" logger="UnhandledError"
Aug 13 00:23:09.688446 kubelet[3171]: E0813 00:23:09.321897 3171 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-a-2fbd311b45\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-a-2fbd311b45' and this object" logger="UnhandledError"
Aug 13 00:23:09.304495 systemd[1]: Created slice kubepods-burstable-podf0437c77_5cee_49cc_a3f4_ee6bceab493a.slice - libcontainer container kubepods-burstable-podf0437c77_5cee_49cc_a3f4_ee6bceab493a.slice.
Aug 13 00:23:09.688979 kubelet[3171]: I0813 00:23:09.386284 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvxl\" (UniqueName: \"kubernetes.io/projected/7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3-kube-api-access-vkvxl\") pod \"calico-kube-controllers-756654fbd8-fsrdw\" (UID: \"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3\") " pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw"
Aug 13 00:23:09.688979 kubelet[3171]: I0813 00:23:09.386332 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-backend-key-pair\") pod \"whisker-5dc58ff4cb-d6pst\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " pod="calico-system/whisker-5dc58ff4cb-d6pst"
Aug 13 00:23:09.688979 kubelet[3171]: I0813 00:23:09.386350 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-ca-bundle\") pod \"whisker-5dc58ff4cb-d6pst\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " pod="calico-system/whisker-5dc58ff4cb-d6pst"
Aug 13 00:23:09.688979 kubelet[3171]: I0813 00:23:09.386389 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtxs\" (UniqueName: \"kubernetes.io/projected/a72bed6b-002b-4a44-960e-b6cc7d136310-kube-api-access-njtxs\") pod \"coredns-668d6bf9bc-l25zw\" (UID: \"a72bed6b-002b-4a44-960e-b6cc7d136310\") " pod="kube-system/coredns-668d6bf9bc-l25zw"
Aug 13 00:23:09.688979 kubelet[3171]: I0813 00:23:09.386409 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktl8z\" (UniqueName: \"kubernetes.io/projected/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-kube-api-access-ktl8z\") pod \"whisker-5dc58ff4cb-d6pst\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " pod="calico-system/whisker-5dc58ff4cb-d6pst"
Aug 13 00:23:09.321411 systemd[1]: Created slice kubepods-burstable-poda72bed6b_002b_4a44_960e_b6cc7d136310.slice - libcontainer container kubepods-burstable-poda72bed6b_002b_4a44_960e_b6cc7d136310.slice.
Aug 13 00:23:09.689160 kubelet[3171]: I0813 00:23:09.386426 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/92ee9526-40a5-4de2-be91-45eaf7973f17-calico-apiserver-certs\") pod \"calico-apiserver-56fd6c9f9d-tgxh9\" (UID: \"92ee9526-40a5-4de2-be91-45eaf7973f17\") " pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9"
Aug 13 00:23:09.689160 kubelet[3171]: I0813 00:23:09.386445 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j7m\" (UniqueName: \"kubernetes.io/projected/d0c10593-4a39-4156-937c-70315299f09c-kube-api-access-c9j7m\") pod \"calico-apiserver-56fd6c9f9d-sgp55\" (UID: \"d0c10593-4a39-4156-937c-70315299f09c\") " pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55"
Aug 13 00:23:09.689160 kubelet[3171]: I0813 00:23:09.386463 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3-tigera-ca-bundle\") pod \"calico-kube-controllers-756654fbd8-fsrdw\" (UID: \"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3\") " pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw"
Aug 13 00:23:09.689160 kubelet[3171]: I0813 00:23:09.386503 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf5886c-e31f-4c51-ba8d-b8b7e72967c4-goldmane-key-pair\") pod \"goldmane-768f4c5c69-l455f\" (UID: \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\") " pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:09.689160 kubelet[3171]: I0813 00:23:09.386521 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf5886c-e31f-4c51-ba8d-b8b7e72967c4-config\") pod \"goldmane-768f4c5c69-l455f\" (UID: \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\") " pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:09.330020 systemd[1]: Created slice kubepods-besteffort-pod7b8d5ee9_2ad0_42de_94dd_d38908c2dfe3.slice - libcontainer container kubepods-besteffort-pod7b8d5ee9_2ad0_42de_94dd_d38908c2dfe3.slice.
Aug 13 00:23:09.689408 kubelet[3171]: I0813 00:23:09.386537 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mlzk\" (UniqueName: \"kubernetes.io/projected/8bf5886c-e31f-4c51-ba8d-b8b7e72967c4-kube-api-access-7mlzk\") pod \"goldmane-768f4c5c69-l455f\" (UID: \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\") " pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:09.689408 kubelet[3171]: I0813 00:23:09.386554 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0437c77-5cee-49cc-a3f4-ee6bceab493a-config-volume\") pod \"coredns-668d6bf9bc-4rkw9\" (UID: \"f0437c77-5cee-49cc-a3f4-ee6bceab493a\") " pod="kube-system/coredns-668d6bf9bc-4rkw9"
Aug 13 00:23:09.689408 kubelet[3171]: I0813 00:23:09.386570 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrr5v\" (UniqueName: \"kubernetes.io/projected/f0437c77-5cee-49cc-a3f4-ee6bceab493a-kube-api-access-zrr5v\") pod \"coredns-668d6bf9bc-4rkw9\" (UID: \"f0437c77-5cee-49cc-a3f4-ee6bceab493a\") " pod="kube-system/coredns-668d6bf9bc-4rkw9"
Aug 13 00:23:09.689408 kubelet[3171]: I0813 00:23:09.386589 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf5886c-e31f-4c51-ba8d-b8b7e72967c4-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-l455f\" (UID: \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\") " pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:09.689408 kubelet[3171]: I0813 00:23:09.386618 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a72bed6b-002b-4a44-960e-b6cc7d136310-config-volume\") pod \"coredns-668d6bf9bc-l25zw\" (UID: \"a72bed6b-002b-4a44-960e-b6cc7d136310\") " pod="kube-system/coredns-668d6bf9bc-l25zw"
Aug 13 00:23:09.342761 systemd[1]: Created slice kubepods-besteffort-podd0c10593_4a39_4156_937c_70315299f09c.slice - libcontainer container kubepods-besteffort-podd0c10593_4a39_4156_937c_70315299f09c.slice.
Aug 13 00:23:09.689570 kubelet[3171]: I0813 00:23:09.386637 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0c10593-4a39-4156-937c-70315299f09c-calico-apiserver-certs\") pod \"calico-apiserver-56fd6c9f9d-sgp55\" (UID: \"d0c10593-4a39-4156-937c-70315299f09c\") " pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55"
Aug 13 00:23:09.689570 kubelet[3171]: I0813 00:23:09.386657 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t68k\" (UniqueName: \"kubernetes.io/projected/92ee9526-40a5-4de2-be91-45eaf7973f17-kube-api-access-2t68k\") pod \"calico-apiserver-56fd6c9f9d-tgxh9\" (UID: \"92ee9526-40a5-4de2-be91-45eaf7973f17\") " pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9"
Aug 13 00:23:09.350368 systemd[1]: Created slice kubepods-besteffort-pod92ee9526_40a5_4de2_be91_45eaf7973f17.slice - libcontainer container kubepods-besteffort-pod92ee9526_40a5_4de2_be91_45eaf7973f17.slice.
Aug 13 00:23:09.361485 systemd[1]: Created slice kubepods-besteffort-pod102c51f8_86ca_4e96_b21d_1cdaf3e4ee86.slice - libcontainer container kubepods-besteffort-pod102c51f8_86ca_4e96_b21d_1cdaf3e4ee86.slice.
Aug 13 00:23:09.368270 systemd[1]: Created slice kubepods-besteffort-pod8bf5886c_e31f_4c51_ba8d_b8b7e72967c4.slice - libcontainer container kubepods-besteffort-pod8bf5886c_e31f_4c51_ba8d_b8b7e72967c4.slice.
Aug 13 00:23:09.990334 containerd[1729]: time="2025-08-13T00:23:09.990283088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rkw9,Uid:f0437c77-5cee-49cc-a3f4-ee6bceab493a,Namespace:kube-system,Attempt:0,}"
Aug 13 00:23:09.994233 containerd[1729]: time="2025-08-13T00:23:09.993717697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l25zw,Uid:a72bed6b-002b-4a44-960e-b6cc7d136310,Namespace:kube-system,Attempt:0,}"
Aug 13 00:23:09.994233 containerd[1729]: time="2025-08-13T00:23:09.993999098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l455f,Uid:8bf5886c-e31f-4c51-ba8d-b8b7e72967c4,Namespace:calico-system,Attempt:0,}"
Aug 13 00:23:09.994990 containerd[1729]: time="2025-08-13T00:23:09.994766260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756654fbd8-fsrdw,Uid:7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3,Namespace:calico-system,Attempt:0,}"
Aug 13 00:23:10.029034 containerd[1729]: time="2025-08-13T00:23:10.028995834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc58ff4cb-d6pst,Uid:102c51f8-86ca-4e96-b21d-1cdaf3e4ee86,Namespace:calico-system,Attempt:0,}"
Aug 13 00:23:10.094214 containerd[1729]: time="2025-08-13T00:23:10.093819132Z" level=info msg="shim disconnected" id=30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d namespace=k8s.io
Aug 13 00:23:10.094214 containerd[1729]: time="2025-08-13T00:23:10.093883132Z" level=warning msg="cleaning up after shim disconnected" id=30437d1c070425326f027470cf7756b807db8d39b148841747bbb69cbcf3915d namespace=k8s.io
Aug 13 00:23:10.094214 containerd[1729]: time="2025-08-13T00:23:10.093892252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:23:10.411851 containerd[1729]: time="2025-08-13T00:23:10.410541242Z" level=error msg="Failed to destroy network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.411851 containerd[1729]: time="2025-08-13T00:23:10.411034523Z" level=error msg="encountered an error cleaning up failed sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.411851 containerd[1729]: time="2025-08-13T00:23:10.411090083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc58ff4cb-d6pst,Uid:102c51f8-86ca-4e96-b21d-1cdaf3e4ee86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.412278 kubelet[3171]: E0813 00:23:10.411376 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.412278 kubelet[3171]: E0813 00:23:10.411448 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dc58ff4cb-d6pst"
Aug 13 00:23:10.412278 kubelet[3171]: E0813 00:23:10.411468 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dc58ff4cb-d6pst"
Aug 13 00:23:10.412373 kubelet[3171]: E0813 00:23:10.411511 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dc58ff4cb-d6pst_calico-system(102c51f8-86ca-4e96-b21d-1cdaf3e4ee86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dc58ff4cb-d6pst_calico-system(102c51f8-86ca-4e96-b21d-1cdaf3e4ee86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dc58ff4cb-d6pst" podUID="102c51f8-86ca-4e96-b21d-1cdaf3e4ee86"
Aug 13 00:23:10.418732 containerd[1729]: time="2025-08-13T00:23:10.417549421Z" level=error msg="Failed to destroy network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.420677 containerd[1729]: time="2025-08-13T00:23:10.420567269Z" level=error msg="encountered an error cleaning up failed sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.420677 containerd[1729]: time="2025-08-13T00:23:10.420664029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756654fbd8-fsrdw,Uid:7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.420953 kubelet[3171]: E0813 00:23:10.420856 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.420953 kubelet[3171]: E0813 00:23:10.420914 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw"
Aug 13 00:23:10.420953 kubelet[3171]: E0813 00:23:10.420933 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw"
Aug 13 00:23:10.421393 kubelet[3171]: E0813 00:23:10.420979 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-756654fbd8-fsrdw_calico-system(7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-756654fbd8-fsrdw_calico-system(7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw" podUID="7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3"
Aug 13 00:23:10.429734 containerd[1729]: time="2025-08-13T00:23:10.429643614Z" level=error msg="Failed to destroy network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.430207 containerd[1729]: time="2025-08-13T00:23:10.430112095Z" level=error msg="Failed to destroy network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.430880 containerd[1729]: time="2025-08-13T00:23:10.430761977Z" level=error msg="encountered an error cleaning up failed sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.430880 containerd[1729]: time="2025-08-13T00:23:10.430813977Z" level=error msg="encountered an error cleaning up failed sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.430987 containerd[1729]: time="2025-08-13T00:23:10.430915657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l455f,Uid:8bf5886c-e31f-4c51-ba8d-b8b7e72967c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.432080 containerd[1729]: time="2025-08-13T00:23:10.430871017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l25zw,Uid:a72bed6b-002b-4a44-960e-b6cc7d136310,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.432201 kubelet[3171]: E0813 00:23:10.431173 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.432201 kubelet[3171]: E0813 00:23:10.431233 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l25zw"
Aug 13 00:23:10.432201 kubelet[3171]: E0813 00:23:10.431258 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l25zw"
Aug 13 00:23:10.432977 kubelet[3171]: E0813 00:23:10.431295 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l25zw_kube-system(a72bed6b-002b-4a44-960e-b6cc7d136310)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l25zw_kube-system(a72bed6b-002b-4a44-960e-b6cc7d136310)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l25zw" podUID="a72bed6b-002b-4a44-960e-b6cc7d136310"
Aug 13 00:23:10.432977 kubelet[3171]: E0813 00:23:10.431177 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.432977 kubelet[3171]: E0813 00:23:10.431614 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:10.433127 containerd[1729]: time="2025-08-13T00:23:10.432221301Z" level=error msg="Failed to destroy network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.433127 containerd[1729]: time="2025-08-13T00:23:10.432620462Z" level=error msg="encountered an error cleaning up failed sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.433127 containerd[1729]: time="2025-08-13T00:23:10.432704902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rkw9,Uid:f0437c77-5cee-49cc-a3f4-ee6bceab493a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.433268 kubelet[3171]: E0813 00:23:10.431634 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-l455f"
Aug 13 00:23:10.433268 kubelet[3171]: E0813 00:23:10.431680 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-l455f_calico-system(8bf5886c-e31f-4c51-ba8d-b8b7e72967c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-l455f_calico-system(8bf5886c-e31f-4c51-ba8d-b8b7e72967c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-l455f" podUID="8bf5886c-e31f-4c51-ba8d-b8b7e72967c4"
Aug 13 00:23:10.433268 kubelet[3171]: E0813 00:23:10.432959 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.433359 kubelet[3171]: E0813 00:23:10.433022 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4rkw9"
Aug 13 00:23:10.433359 kubelet[3171]: E0813 00:23:10.433042 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4rkw9"
Aug 13 00:23:10.433359 kubelet[3171]: E0813 00:23:10.433081 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4rkw9_kube-system(f0437c77-5cee-49cc-a3f4-ee6bceab493a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4rkw9_kube-system(f0437c77-5cee-49cc-a3f4-ee6bceab493a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4rkw9" podUID="f0437c77-5cee-49cc-a3f4-ee6bceab493a"
Aug 13 00:23:10.477218 systemd[1]: Created slice kubepods-besteffort-pod7bc7ffc9_ee10_44a4_88ba_09de883ee749.slice - libcontainer container kubepods-besteffort-pod7bc7ffc9_ee10_44a4_88ba_09de883ee749.slice.
Aug 13 00:23:10.480190 containerd[1729]: time="2025-08-13T00:23:10.480077272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbgr7,Uid:7bc7ffc9-ee10-44a4-88ba-09de883ee749,Namespace:calico-system,Attempt:0,}"
Aug 13 00:23:10.552342 containerd[1729]: time="2025-08-13T00:23:10.552278471Z" level=error msg="Failed to destroy network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.552640 containerd[1729]: time="2025-08-13T00:23:10.552610192Z" level=error msg="encountered an error cleaning up failed sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.552687 containerd[1729]: time="2025-08-13T00:23:10.552673752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbgr7,Uid:7bc7ffc9-ee10-44a4-88ba-09de883ee749,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.552954 kubelet[3171]: E0813 00:23:10.552902 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:10.553013 kubelet[3171]: E0813 00:23:10.552975 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbgr7"
Aug 13 00:23:10.553013 kubelet[3171]: E0813 00:23:10.552993 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbgr7"
Aug 13 00:23:10.553168 kubelet[3171]: E0813 00:23:10.553039 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbgr7_calico-system(7bc7ffc9-ee10-44a4-88ba-09de883ee749)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbgr7_calico-system(7bc7ffc9-ee10-44a4-88ba-09de883ee749)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749" Aug 13 00:23:10.593050 containerd[1729]: time="2025-08-13T00:23:10.592993022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-tgxh9,Uid:92ee9526-40a5-4de2-be91-45eaf7973f17,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:23:10.601093 kubelet[3171]: I0813 00:23:10.600866 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:10.602380 containerd[1729]: time="2025-08-13T00:23:10.601843367Z" level=info msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" Aug 13 00:23:10.602380 containerd[1729]: time="2025-08-13T00:23:10.602054287Z" level=info msg="Ensure that sandbox 66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad in task-service has been cleanup successfully" Aug 13 00:23:10.608086 containerd[1729]: time="2025-08-13T00:23:10.608008984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:23:10.612308 containerd[1729]: time="2025-08-13T00:23:10.611473833Z" level=info msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" Aug 13 00:23:10.612480 kubelet[3171]: I0813 00:23:10.609886 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:10.613826 containerd[1729]: time="2025-08-13T00:23:10.613679679Z" level=info msg="Ensure that sandbox 6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab in task-service has been cleanup successfully" Aug 13 00:23:10.617164 kubelet[3171]: I0813 00:23:10.616958 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:10.619504 containerd[1729]: 
time="2025-08-13T00:23:10.619321335Z" level=info msg="StopPodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" Aug 13 00:23:10.619607 containerd[1729]: time="2025-08-13T00:23:10.619520215Z" level=info msg="Ensure that sandbox e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194 in task-service has been cleanup successfully" Aug 13 00:23:10.624754 kubelet[3171]: I0813 00:23:10.624329 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:10.629429 containerd[1729]: time="2025-08-13T00:23:10.626416994Z" level=info msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\"" Aug 13 00:23:10.632694 containerd[1729]: time="2025-08-13T00:23:10.632302770Z" level=info msg="Ensure that sandbox c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa in task-service has been cleanup successfully" Aug 13 00:23:10.640240 containerd[1729]: time="2025-08-13T00:23:10.638533827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-sgp55,Uid:d0c10593-4a39-4156-937c-70315299f09c,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:23:10.641228 kubelet[3171]: I0813 00:23:10.641198 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:10.646973 containerd[1729]: time="2025-08-13T00:23:10.644436284Z" level=info msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" Aug 13 00:23:10.646973 containerd[1729]: time="2025-08-13T00:23:10.644650124Z" level=info msg="Ensure that sandbox 65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619 in task-service has been cleanup successfully" Aug 13 00:23:10.666847 kubelet[3171]: I0813 00:23:10.666741 3171 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:10.669119 containerd[1729]: time="2025-08-13T00:23:10.668999711Z" level=info msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\"" Aug 13 00:23:10.669820 containerd[1729]: time="2025-08-13T00:23:10.669784313Z" level=info msg="Ensure that sandbox c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047 in task-service has been cleanup successfully" Aug 13 00:23:10.742385 containerd[1729]: time="2025-08-13T00:23:10.742330232Z" level=error msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" failed" error="failed to destroy network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.742777 kubelet[3171]: E0813 00:23:10.742740 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:10.744437 kubelet[3171]: E0813 00:23:10.744274 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad"} Aug 13 00:23:10.744437 kubelet[3171]: E0813 00:23:10.744367 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0437c77-5cee-49cc-a3f4-ee6bceab493a\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.744437 kubelet[3171]: E0813 00:23:10.744391 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0437c77-5cee-49cc-a3f4-ee6bceab493a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4rkw9" podUID="f0437c77-5cee-49cc-a3f4-ee6bceab493a" Aug 13 00:23:10.750545 containerd[1729]: time="2025-08-13T00:23:10.750400575Z" level=error msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" failed" error="failed to destroy network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.751481 kubelet[3171]: E0813 00:23:10.751275 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:10.751481 kubelet[3171]: 
E0813 00:23:10.751331 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab"} Aug 13 00:23:10.751481 kubelet[3171]: E0813 00:23:10.751364 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.751481 kubelet[3171]: E0813 00:23:10.751426 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bc7ffc9-ee10-44a4-88ba-09de883ee749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wbgr7" podUID="7bc7ffc9-ee10-44a4-88ba-09de883ee749" Aug 13 00:23:10.758224 containerd[1729]: time="2025-08-13T00:23:10.757664635Z" level=error msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" failed" error="failed to destroy network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.758406 kubelet[3171]: E0813 00:23:10.757914 3171 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:10.758406 kubelet[3171]: E0813 00:23:10.757965 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619"} Aug 13 00:23:10.758406 kubelet[3171]: E0813 00:23:10.758030 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.758406 kubelet[3171]: E0813 00:23:10.758052 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-l455f" podUID="8bf5886c-e31f-4c51-ba8d-b8b7e72967c4" Aug 13 00:23:10.758862 containerd[1729]: time="2025-08-13T00:23:10.758673797Z" level=error msg="StopPodSandbox for 
\"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" failed" error="failed to destroy network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.758923 kubelet[3171]: E0813 00:23:10.758860 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:10.758923 kubelet[3171]: E0813 00:23:10.758900 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194"} Aug 13 00:23:10.759079 kubelet[3171]: E0813 00:23:10.758924 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.759079 kubelet[3171]: E0813 00:23:10.758943 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dc58ff4cb-d6pst" podUID="102c51f8-86ca-4e96-b21d-1cdaf3e4ee86" Aug 13 00:23:10.770392 containerd[1729]: time="2025-08-13T00:23:10.770293789Z" level=error msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" failed" error="failed to destroy network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.770843 kubelet[3171]: E0813 00:23:10.770538 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:10.770843 kubelet[3171]: E0813 00:23:10.770590 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047"} Aug 13 00:23:10.770843 kubelet[3171]: E0813 00:23:10.770622 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a72bed6b-002b-4a44-960e-b6cc7d136310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.770843 kubelet[3171]: E0813 00:23:10.770648 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a72bed6b-002b-4a44-960e-b6cc7d136310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l25zw" podUID="a72bed6b-002b-4a44-960e-b6cc7d136310" Aug 13 00:23:10.777360 containerd[1729]: time="2025-08-13T00:23:10.777314449Z" level=error msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" failed" error="failed to destroy network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.777812 kubelet[3171]: E0813 00:23:10.777747 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:10.777903 kubelet[3171]: E0813 00:23:10.777810 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"} Aug 13 00:23:10.777903 kubelet[3171]: E0813 00:23:10.777846 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:10.777903 kubelet[3171]: E0813 00:23:10.777875 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw" podUID="7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3" Aug 13 00:23:10.807415 containerd[1729]: time="2025-08-13T00:23:10.807352171Z" level=error msg="Failed to destroy network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.808591 containerd[1729]: time="2025-08-13T00:23:10.808423734Z" level=error msg="encountered an error cleaning up failed sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.808591 containerd[1729]: time="2025-08-13T00:23:10.808482014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-tgxh9,Uid:92ee9526-40a5-4de2-be91-45eaf7973f17,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.809178 kubelet[3171]: E0813 00:23:10.808928 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.809178 kubelet[3171]: E0813 00:23:10.808990 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9" Aug 13 00:23:10.809178 kubelet[3171]: E0813 00:23:10.809010 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9" Aug 13 00:23:10.809317 kubelet[3171]: E0813 00:23:10.809046 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56fd6c9f9d-tgxh9_calico-apiserver(92ee9526-40a5-4de2-be91-45eaf7973f17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56fd6c9f9d-tgxh9_calico-apiserver(92ee9526-40a5-4de2-be91-45eaf7973f17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9" podUID="92ee9526-40a5-4de2-be91-45eaf7973f17" Aug 13 00:23:10.816797 containerd[1729]: time="2025-08-13T00:23:10.816654637Z" level=error msg="Failed to destroy network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.817481 containerd[1729]: time="2025-08-13T00:23:10.817208158Z" level=error msg="encountered an error cleaning up failed sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.817481 containerd[1729]: time="2025-08-13T00:23:10.817273878Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-sgp55,Uid:d0c10593-4a39-4156-937c-70315299f09c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.817611 kubelet[3171]: E0813 00:23:10.817511 3171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:10.817611 kubelet[3171]: E0813 00:23:10.817569 3171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55" Aug 13 00:23:10.817611 kubelet[3171]: E0813 00:23:10.817590 3171 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55" Aug 13 00:23:10.817810 kubelet[3171]: E0813 00:23:10.817639 3171 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56fd6c9f9d-sgp55_calico-apiserver(d0c10593-4a39-4156-937c-70315299f09c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56fd6c9f9d-sgp55_calico-apiserver(d0c10593-4a39-4156-937c-70315299f09c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55" podUID="d0c10593-4a39-4156-937c-70315299f09c" Aug 13 00:23:11.240155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa-shm.mount: Deactivated successfully. Aug 13 00:23:11.240258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619-shm.mount: Deactivated successfully. Aug 13 00:23:11.240311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047-shm.mount: Deactivated successfully. Aug 13 00:23:11.240363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad-shm.mount: Deactivated successfully. Aug 13 00:23:11.240412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194-shm.mount: Deactivated successfully. 
Aug 13 00:23:11.671042 kubelet[3171]: I0813 00:23:11.670908 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83"
Aug 13 00:23:11.674164 containerd[1729]: time="2025-08-13T00:23:11.671737064Z" level=info msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\""
Aug 13 00:23:11.674164 containerd[1729]: time="2025-08-13T00:23:11.671977465Z" level=info msg="Ensure that sandbox 23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83 in task-service has been cleanup successfully"
Aug 13 00:23:11.674164 containerd[1729]: time="2025-08-13T00:23:11.673454789Z" level=info msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\""
Aug 13 00:23:11.674164 containerd[1729]: time="2025-08-13T00:23:11.673677350Z" level=info msg="Ensure that sandbox 1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2 in task-service has been cleanup successfully"
Aug 13 00:23:11.674577 kubelet[3171]: I0813 00:23:11.672620 3171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2"
Aug 13 00:23:11.709574 containerd[1729]: time="2025-08-13T00:23:11.709277767Z" level=error msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" failed" error="failed to destroy network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:11.709773 kubelet[3171]: E0813 00:23:11.709554 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2"
Aug 13 00:23:11.709773 kubelet[3171]: E0813 00:23:11.709633 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2"}
Aug 13 00:23:11.709773 kubelet[3171]: E0813 00:23:11.709669 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0c10593-4a39-4156-937c-70315299f09c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:23:11.709773 kubelet[3171]: E0813 00:23:11.709695 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0c10593-4a39-4156-937c-70315299f09c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55" podUID="d0c10593-4a39-4156-937c-70315299f09c"
Aug 13 00:23:11.717228 containerd[1729]: time="2025-08-13T00:23:11.716075226Z" level=error msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" failed" error="failed to destroy network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:23:11.717390 kubelet[3171]: E0813 00:23:11.717244 3171 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83"
Aug 13 00:23:11.717390 kubelet[3171]: E0813 00:23:11.717309 3171 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83"}
Aug 13 00:23:11.717390 kubelet[3171]: E0813 00:23:11.717342 3171 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ee9526-40a5-4de2-be91-45eaf7973f17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:23:11.717509 kubelet[3171]: E0813 00:23:11.717399 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ee9526-40a5-4de2-be91-45eaf7973f17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9" podUID="92ee9526-40a5-4de2-be91-45eaf7973f17" Aug 13 00:23:15.343712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452400524.mount: Deactivated successfully. Aug 13 00:23:15.397508 containerd[1729]: time="2025-08-13T00:23:15.397447623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:15.399913 containerd[1729]: time="2025-08-13T00:23:15.399858269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:23:15.402453 containerd[1729]: time="2025-08-13T00:23:15.402359316Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:15.410356 containerd[1729]: time="2025-08-13T00:23:15.409376815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:15.410356 containerd[1729]: time="2025-08-13T00:23:15.410020497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.801969913s" Aug 13 00:23:15.410356 containerd[1729]: time="2025-08-13T00:23:15.410052657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:23:15.428831 containerd[1729]: time="2025-08-13T00:23:15.428791149Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:23:15.471889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228556064.mount: Deactivated successfully. Aug 13 00:23:15.482208 containerd[1729]: time="2025-08-13T00:23:15.482125616Z" level=info msg="CreateContainer within sandbox \"b0175830f92f8d347a8edbe62fe52316f0a75f5082134225449a132c123f6ef3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aefeaba50fcb5512cbd9f6d89fd15c6a7dc567005074ba4762948ebb0143158f\"" Aug 13 00:23:15.483043 containerd[1729]: time="2025-08-13T00:23:15.483004299Z" level=info msg="StartContainer for \"aefeaba50fcb5512cbd9f6d89fd15c6a7dc567005074ba4762948ebb0143158f\"" Aug 13 00:23:15.517359 systemd[1]: Started cri-containerd-aefeaba50fcb5512cbd9f6d89fd15c6a7dc567005074ba4762948ebb0143158f.scope - libcontainer container aefeaba50fcb5512cbd9f6d89fd15c6a7dc567005074ba4762948ebb0143158f. 
Aug 13 00:23:15.557769 containerd[1729]: time="2025-08-13T00:23:15.557633625Z" level=info msg="StartContainer for \"aefeaba50fcb5512cbd9f6d89fd15c6a7dc567005074ba4762948ebb0143158f\" returns successfully" Aug 13 00:23:15.726183 kubelet[3171]: I0813 00:23:15.725909 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gntcm" podStartSLOduration=1.514134584 podStartE2EDuration="15.725886369s" podCreationTimestamp="2025-08-13 00:23:00 +0000 UTC" firstStartedPulling="2025-08-13 00:23:01.199377035 +0000 UTC m=+21.884505755" lastFinishedPulling="2025-08-13 00:23:15.41112882 +0000 UTC m=+36.096257540" observedRunningTime="2025-08-13 00:23:15.725515448 +0000 UTC m=+36.410644208" watchObservedRunningTime="2025-08-13 00:23:15.725886369 +0000 UTC m=+36.411015129" Aug 13 00:23:15.945065 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:23:15.945208 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 00:23:16.080524 containerd[1729]: time="2025-08-13T00:23:16.080363507Z" level=info msg="StopPodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.187 [INFO][4385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.188 [INFO][4385] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" iface="eth0" netns="/var/run/netns/cni-8d7c8127-c8c2-ccf1-8481-4e09880ad839" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.189 [INFO][4385] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" iface="eth0" netns="/var/run/netns/cni-8d7c8127-c8c2-ccf1-8481-4e09880ad839" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.191 [INFO][4385] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" iface="eth0" netns="/var/run/netns/cni-8d7c8127-c8c2-ccf1-8481-4e09880ad839" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.191 [INFO][4385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.193 [INFO][4385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.219 [INFO][4398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.219 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.219 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.231 [WARNING][4398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.231 [INFO][4398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.233 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:16.239168 containerd[1729]: 2025-08-13 00:23:16.236 [INFO][4385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:16.240462 containerd[1729]: time="2025-08-13T00:23:16.239810707Z" level=info msg="TearDown network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" successfully" Aug 13 00:23:16.240462 containerd[1729]: time="2025-08-13T00:23:16.240292228Z" level=info msg="StopPodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" returns successfully" Aug 13 00:23:16.340561 kubelet[3171]: I0813 00:23:16.340199 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-ca-bundle\") pod \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " Aug 13 00:23:16.340561 kubelet[3171]: I0813 00:23:16.340251 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktl8z\" (UniqueName: 
\"kubernetes.io/projected/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-kube-api-access-ktl8z\") pod \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " Aug 13 00:23:16.340561 kubelet[3171]: I0813 00:23:16.340288 3171 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-backend-key-pair\") pod \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\" (UID: \"102c51f8-86ca-4e96-b21d-1cdaf3e4ee86\") " Aug 13 00:23:16.351318 kubelet[3171]: I0813 00:23:16.346779 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86" (UID: "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:23:16.351318 kubelet[3171]: I0813 00:23:16.349722 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86" (UID: "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:23:16.350976 systemd[1]: run-netns-cni\x2d8d7c8127\x2dc8c2\x2dccf1\x2d8481\x2d4e09880ad839.mount: Deactivated successfully. Aug 13 00:23:16.358070 kubelet[3171]: I0813 00:23:16.357999 3171 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-kube-api-access-ktl8z" (OuterVolumeSpecName: "kube-api-access-ktl8z") pod "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86" (UID: "102c51f8-86ca-4e96-b21d-1cdaf3e4ee86"). InnerVolumeSpecName "kube-api-access-ktl8z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:23:16.358938 systemd[1]: var-lib-kubelet-pods-102c51f8\x2d86ca\x2d4e96\x2db21d\x2d1cdaf3e4ee86-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:23:16.361812 systemd[1]: var-lib-kubelet-pods-102c51f8\x2d86ca\x2d4e96\x2db21d\x2d1cdaf3e4ee86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dktl8z.mount: Deactivated successfully. Aug 13 00:23:16.440724 kubelet[3171]: I0813 00:23:16.440674 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-ca-bundle\") on node \"ci-4081.3.5-a-2fbd311b45\" DevicePath \"\"" Aug 13 00:23:16.440724 kubelet[3171]: I0813 00:23:16.440721 3171 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ktl8z\" (UniqueName: \"kubernetes.io/projected/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-kube-api-access-ktl8z\") on node \"ci-4081.3.5-a-2fbd311b45\" DevicePath \"\"" Aug 13 00:23:16.440724 kubelet[3171]: I0813 00:23:16.440732 3171 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-2fbd311b45\" DevicePath \"\"" Aug 13 00:23:16.701385 systemd[1]: Removed slice kubepods-besteffort-pod102c51f8_86ca_4e96_b21d_1cdaf3e4ee86.slice - libcontainer container kubepods-besteffort-pod102c51f8_86ca_4e96_b21d_1cdaf3e4ee86.slice. Aug 13 00:23:16.806045 systemd[1]: Created slice kubepods-besteffort-pod34a3b1b6_a5e7_4c21_b875_707eea70f9d3.slice - libcontainer container kubepods-besteffort-pod34a3b1b6_a5e7_4c21_b875_707eea70f9d3.slice. 
Aug 13 00:23:16.844309 kubelet[3171]: I0813 00:23:16.843819 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a3b1b6-a5e7-4c21-b875-707eea70f9d3-whisker-ca-bundle\") pod \"whisker-656d64bf9b-fss7b\" (UID: \"34a3b1b6-a5e7-4c21-b875-707eea70f9d3\") " pod="calico-system/whisker-656d64bf9b-fss7b" Aug 13 00:23:16.844309 kubelet[3171]: I0813 00:23:16.843871 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34a3b1b6-a5e7-4c21-b875-707eea70f9d3-whisker-backend-key-pair\") pod \"whisker-656d64bf9b-fss7b\" (UID: \"34a3b1b6-a5e7-4c21-b875-707eea70f9d3\") " pod="calico-system/whisker-656d64bf9b-fss7b" Aug 13 00:23:16.844309 kubelet[3171]: I0813 00:23:16.843917 3171 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqvz\" (UniqueName: \"kubernetes.io/projected/34a3b1b6-a5e7-4c21-b875-707eea70f9d3-kube-api-access-ssqvz\") pod \"whisker-656d64bf9b-fss7b\" (UID: \"34a3b1b6-a5e7-4c21-b875-707eea70f9d3\") " pod="calico-system/whisker-656d64bf9b-fss7b" Aug 13 00:23:17.112117 containerd[1729]: time="2025-08-13T00:23:17.111987433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656d64bf9b-fss7b,Uid:34a3b1b6-a5e7-4c21-b875-707eea70f9d3,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:17.481098 kubelet[3171]: I0813 00:23:17.481054 3171 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102c51f8-86ca-4e96-b21d-1cdaf3e4ee86" path="/var/lib/kubelet/pods/102c51f8-86ca-4e96-b21d-1cdaf3e4ee86/volumes" Aug 13 00:23:17.567471 systemd-networkd[1579]: cali35559a61545: Link UP Aug 13 00:23:17.568125 systemd-networkd[1579]: cali35559a61545: Gained carrier Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.384 [INFO][4442] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.403 [INFO][4442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0 whisker-656d64bf9b- calico-system 34a3b1b6-a5e7-4c21-b875-707eea70f9d3 864 0 2025-08-13 00:23:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:656d64bf9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 whisker-656d64bf9b-fss7b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35559a61545 [] [] }} ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.403 [INFO][4442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.468 [INFO][4482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" HandleID="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.468 [INFO][4482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" 
HandleID="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"whisker-656d64bf9b-fss7b", "timestamp":"2025-08-13 00:23:17.468308096 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.468 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.468 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.468 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.490 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.497 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.503 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.506 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.510 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.510 [INFO][4482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.512 [INFO][4482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444 Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.518 [INFO][4482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.527 [INFO][4482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.193/26] block=192.168.105.192/26 handle="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.528 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.193/26] handle="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.528 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:17.601896 containerd[1729]: 2025-08-13 00:23:17.528 [INFO][4482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.193/26] IPv6=[] ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" HandleID="k8s-pod-network.930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.531 [INFO][4442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0", GenerateName:"whisker-656d64bf9b-", Namespace:"calico-system", SelfLink:"", UID:"34a3b1b6-a5e7-4c21-b875-707eea70f9d3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656d64bf9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"whisker-656d64bf9b-fss7b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali35559a61545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.531 [INFO][4442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.193/32] ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.532 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35559a61545 ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.564 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.567 [INFO][4442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0", GenerateName:"whisker-656d64bf9b-", Namespace:"calico-system", SelfLink:"", 
UID:"34a3b1b6-a5e7-4c21-b875-707eea70f9d3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656d64bf9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444", Pod:"whisker-656d64bf9b-fss7b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35559a61545", MAC:"f6:6a:fb:a7:58:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:17.604124 containerd[1729]: 2025-08-13 00:23:17.596 [INFO][4442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444" Namespace="calico-system" Pod="whisker-656d64bf9b-fss7b" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--656d64bf9b--fss7b-eth0" Aug 13 00:23:17.632821 containerd[1729]: time="2025-08-13T00:23:17.632592669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:17.632821 containerd[1729]: time="2025-08-13T00:23:17.632686710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:17.632821 containerd[1729]: time="2025-08-13T00:23:17.632706830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:17.635065 containerd[1729]: time="2025-08-13T00:23:17.632901110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:17.668860 systemd[1]: run-containerd-runc-k8s.io-930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444-runc.YuTr4w.mount: Deactivated successfully. Aug 13 00:23:17.681462 systemd[1]: Started cri-containerd-930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444.scope - libcontainer container 930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444. Aug 13 00:23:17.743402 containerd[1729]: time="2025-08-13T00:23:17.743211095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656d64bf9b-fss7b,Uid:34a3b1b6-a5e7-4c21-b875-707eea70f9d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444\"" Aug 13 00:23:17.762996 containerd[1729]: time="2025-08-13T00:23:17.762858189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:23:18.985508 systemd-networkd[1579]: cali35559a61545: Gained IPv6LL Aug 13 00:23:19.031923 containerd[1729]: time="2025-08-13T00:23:19.031866530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:19.034639 containerd[1729]: time="2025-08-13T00:23:19.034491417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:23:19.038973 containerd[1729]: time="2025-08-13T00:23:19.038892950Z" level=info msg="ImageCreate event 
name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:19.044838 containerd[1729]: time="2025-08-13T00:23:19.044715686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:19.045719 containerd[1729]: time="2025-08-13T00:23:19.045571768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.282654779s" Aug 13 00:23:19.045719 containerd[1729]: time="2025-08-13T00:23:19.045613008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:23:19.058648 containerd[1729]: time="2025-08-13T00:23:19.058495764Z" level=info msg="CreateContainer within sandbox \"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:23:19.096667 containerd[1729]: time="2025-08-13T00:23:19.096618349Z" level=info msg="CreateContainer within sandbox \"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7069ec5655edda428f1be29972e07536cb0256faf8687cc4375d774140ae2e8c\"" Aug 13 00:23:19.097461 containerd[1729]: time="2025-08-13T00:23:19.097426791Z" level=info msg="StartContainer for \"7069ec5655edda428f1be29972e07536cb0256faf8687cc4375d774140ae2e8c\"" Aug 13 00:23:19.129346 systemd[1]: Started 
cri-containerd-7069ec5655edda428f1be29972e07536cb0256faf8687cc4375d774140ae2e8c.scope - libcontainer container 7069ec5655edda428f1be29972e07536cb0256faf8687cc4375d774140ae2e8c. Aug 13 00:23:19.193267 containerd[1729]: time="2025-08-13T00:23:19.192243893Z" level=info msg="StartContainer for \"7069ec5655edda428f1be29972e07536cb0256faf8687cc4375d774140ae2e8c\" returns successfully" Aug 13 00:23:19.194550 containerd[1729]: time="2025-08-13T00:23:19.194522459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:23:20.860362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322865832.mount: Deactivated successfully. Aug 13 00:23:21.474093 containerd[1729]: time="2025-08-13T00:23:21.473911515Z" level=info msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.524 [INFO][4728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.787 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" iface="eth0" netns="/var/run/netns/cni-094bd47b-377c-66d8-e106-87902d50f8ee" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.787 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" iface="eth0" netns="/var/run/netns/cni-094bd47b-377c-66d8-e106-87902d50f8ee" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.788 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" iface="eth0" netns="/var/run/netns/cni-094bd47b-377c-66d8-e106-87902d50f8ee" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.788 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.788 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.811 [INFO][4736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.812 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.812 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.820 [WARNING][4736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.820 [INFO][4736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.821 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:21.826187 containerd[1729]: 2025-08-13 00:23:21.823 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:21.826187 containerd[1729]: time="2025-08-13T00:23:21.824807333Z" level=info msg="TearDown network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" successfully" Aug 13 00:23:21.826187 containerd[1729]: time="2025-08-13T00:23:21.824842253Z" level=info msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" returns successfully" Aug 13 00:23:21.828096 containerd[1729]: time="2025-08-13T00:23:21.827367100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rkw9,Uid:f0437c77-5cee-49cc-a3f4-ee6bceab493a,Namespace:kube-system,Attempt:1,}" Aug 13 00:23:21.828496 systemd[1]: run-netns-cni\x2d094bd47b\x2d377c\x2d66d8\x2de106\x2d87902d50f8ee.mount: Deactivated successfully. 
Aug 13 00:23:22.203179 containerd[1729]: time="2025-08-13T00:23:22.202354465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.204611 containerd[1729]: time="2025-08-13T00:23:22.204573551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:23:22.210163 containerd[1729]: time="2025-08-13T00:23:22.210048806Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.228787 containerd[1729]: time="2025-08-13T00:23:22.228595378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.230402 containerd[1729]: time="2025-08-13T00:23:22.230358383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.035564083s" Aug 13 00:23:22.230473 containerd[1729]: time="2025-08-13T00:23:22.230406303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:23:22.236781 containerd[1729]: time="2025-08-13T00:23:22.236636681Z" level=info msg="CreateContainer within sandbox \"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:23:22.261732 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469577246.mount: Deactivated successfully. Aug 13 00:23:22.273631 containerd[1729]: time="2025-08-13T00:23:22.273579383Z" level=info msg="CreateContainer within sandbox \"930e31652cc7341dbcdd8aec54740b6b0a0006833f64022a88795eca17e51444\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e5bd77453caf737dfd6885edac918898a4d73ee1d6bf2d30b14c08a5f586133b\"" Aug 13 00:23:22.274800 containerd[1729]: time="2025-08-13T00:23:22.274763107Z" level=info msg="StartContainer for \"e5bd77453caf737dfd6885edac918898a4d73ee1d6bf2d30b14c08a5f586133b\"" Aug 13 00:23:22.315345 systemd[1]: Started cri-containerd-e5bd77453caf737dfd6885edac918898a4d73ee1d6bf2d30b14c08a5f586133b.scope - libcontainer container e5bd77453caf737dfd6885edac918898a4d73ee1d6bf2d30b14c08a5f586133b. Aug 13 00:23:22.368018 containerd[1729]: time="2025-08-13T00:23:22.367793526Z" level=info msg="StartContainer for \"e5bd77453caf737dfd6885edac918898a4d73ee1d6bf2d30b14c08a5f586133b\" returns successfully" Aug 13 00:23:22.388163 systemd-networkd[1579]: cali336563c61e6: Link UP Aug 13 00:23:22.389786 systemd-networkd[1579]: cali336563c61e6: Gained carrier Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.267 [INFO][4742] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.293 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0 coredns-668d6bf9bc- kube-system f0437c77-5cee-49cc-a3f4-ee6bceab493a 887 0 2025-08-13 00:22:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 coredns-668d6bf9bc-4rkw9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali336563c61e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.293 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.331 [INFO][4769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" HandleID="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.331 [INFO][4769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" HandleID="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"coredns-668d6bf9bc-4rkw9", "timestamp":"2025-08-13 00:23:22.331461705 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.332 [INFO][4769] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.332 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.332 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.342 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.346 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.354 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.356 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.359 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.360 [INFO][4769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.362 [INFO][4769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639 Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.371 [INFO][4769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 
handle="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.381 [INFO][4769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.194/26] block=192.168.105.192/26 handle="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.381 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.194/26] handle="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.381 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:22.414540 containerd[1729]: 2025-08-13 00:23:22.381 [INFO][4769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.194/26] IPv6=[] ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" HandleID="k8s-pod-network.7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.415195 containerd[1729]: 2025-08-13 00:23:22.384 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0437c77-5cee-49cc-a3f4-ee6bceab493a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"coredns-668d6bf9bc-4rkw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali336563c61e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.415195 containerd[1729]: 2025-08-13 00:23:22.384 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.194/32] ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.415195 containerd[1729]: 2025-08-13 00:23:22.384 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali336563c61e6 ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.415195 containerd[1729]: 2025-08-13 00:23:22.389 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.415195 containerd[1729]: 2025-08-13 00:23:22.391 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0437c77-5cee-49cc-a3f4-ee6bceab493a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639", Pod:"coredns-668d6bf9bc-4rkw9", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.105.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali336563c61e6", MAC:"ca:1a:6c:25:a0:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.415399 containerd[1729]: 2025-08-13 00:23:22.412 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rkw9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:22.444538 containerd[1729]: time="2025-08-13T00:23:22.444087899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:22.444538 containerd[1729]: time="2025-08-13T00:23:22.444187779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:22.444538 containerd[1729]: time="2025-08-13T00:23:22.444214859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.444538 containerd[1729]: time="2025-08-13T00:23:22.444310979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.463430 systemd[1]: Started cri-containerd-7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639.scope - libcontainer container 7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639. Aug 13 00:23:22.472869 containerd[1729]: time="2025-08-13T00:23:22.472815939Z" level=info msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\"" Aug 13 00:23:22.504716 containerd[1729]: time="2025-08-13T00:23:22.504659547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rkw9,Uid:f0437c77-5cee-49cc-a3f4-ee6bceab493a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639\"" Aug 13 00:23:22.509689 containerd[1729]: time="2025-08-13T00:23:22.509418841Z" level=info msg="CreateContainer within sandbox \"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:22.544959 containerd[1729]: time="2025-08-13T00:23:22.544795659Z" level=info msg="CreateContainer within sandbox \"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54747919bcad9b0949eacb579771b6f8d8555e810a70908e437bd02e67cee6f5\"" Aug 13 00:23:22.546407 containerd[1729]: time="2025-08-13T00:23:22.546363384Z" level=info msg="StartContainer for \"54747919bcad9b0949eacb579771b6f8d8555e810a70908e437bd02e67cee6f5\"" Aug 13 00:23:22.587482 systemd[1]: Started cri-containerd-54747919bcad9b0949eacb579771b6f8d8555e810a70908e437bd02e67cee6f5.scope - libcontainer container 54747919bcad9b0949eacb579771b6f8d8555e810a70908e437bd02e67cee6f5. 
Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.565 [INFO][4850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.565 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" iface="eth0" netns="/var/run/netns/cni-d8782d09-136f-52ed-3357-c9f0479f70e5" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.566 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" iface="eth0" netns="/var/run/netns/cni-d8782d09-136f-52ed-3357-c9f0479f70e5" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.566 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" iface="eth0" netns="/var/run/netns/cni-d8782d09-136f-52ed-3357-c9f0479f70e5" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.566 [INFO][4850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.566 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.603 [INFO][4877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.603 [INFO][4877] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.604 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.613 [WARNING][4877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.613 [INFO][4877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.615 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:22.620216 containerd[1729]: 2025-08-13 00:23:22.617 [INFO][4850] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:22.628836 containerd[1729]: time="2025-08-13T00:23:22.627725130Z" level=info msg="TearDown network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" successfully" Aug 13 00:23:22.628836 containerd[1729]: time="2025-08-13T00:23:22.627777290Z" level=info msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" returns successfully" Aug 13 00:23:22.630598 containerd[1729]: time="2025-08-13T00:23:22.630556418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l25zw,Uid:a72bed6b-002b-4a44-960e-b6cc7d136310,Namespace:kube-system,Attempt:1,}" Aug 13 00:23:22.671245 containerd[1729]: time="2025-08-13T00:23:22.671041251Z" level=info msg="StartContainer for \"54747919bcad9b0949eacb579771b6f8d8555e810a70908e437bd02e67cee6f5\" returns successfully" Aug 13 00:23:22.757658 kubelet[3171]: I0813 00:23:22.755453 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4rkw9" podStartSLOduration=38.755431926 podStartE2EDuration="38.755431926s" podCreationTimestamp="2025-08-13 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:22.754195243 +0000 UTC m=+43.439324043" watchObservedRunningTime="2025-08-13 00:23:22.755431926 +0000 UTC m=+43.440560686" Aug 13 00:23:22.834173 kubelet[3171]: I0813 00:23:22.830308 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-656d64bf9b-fss7b" podStartSLOduration=2.351543549 podStartE2EDuration="6.830287135s" podCreationTimestamp="2025-08-13 00:23:16 +0000 UTC" firstStartedPulling="2025-08-13 00:23:17.753596483 +0000 UTC m=+38.438725243" lastFinishedPulling="2025-08-13 00:23:22.232340069 +0000 UTC m=+42.917468829" observedRunningTime="2025-08-13 
00:23:22.829537293 +0000 UTC m=+43.514666013" watchObservedRunningTime="2025-08-13 00:23:22.830287135 +0000 UTC m=+43.515415895" Aug 13 00:23:22.840863 systemd[1]: run-netns-cni\x2dd8782d09\x2d136f\x2d52ed\x2d3357\x2dc9f0479f70e5.mount: Deactivated successfully. Aug 13 00:23:22.893391 systemd-networkd[1579]: calia332504d368: Link UP Aug 13 00:23:22.894507 systemd-networkd[1579]: calia332504d368: Gained carrier Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.709 [INFO][4906] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.728 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0 coredns-668d6bf9bc- kube-system a72bed6b-002b-4a44-960e-b6cc7d136310 899 0 2025-08-13 00:22:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 coredns-668d6bf9bc-l25zw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia332504d368 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.728 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.776 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" HandleID="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.776 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" HandleID="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"coredns-668d6bf9bc-l25zw", "timestamp":"2025-08-13 00:23:22.776230944 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.776 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.776 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.776 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.793 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.828 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.848 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.850 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.854 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.854 [INFO][4919] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.862 [INFO][4919] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.872 [INFO][4919] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.882 [INFO][4919] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.195/26] block=192.168.105.192/26 handle="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.882 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.195/26] handle="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.883 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:22.921546 containerd[1729]: 2025-08-13 00:23:22.883 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.195/26] IPv6=[] ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" HandleID="k8s-pod-network.ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.922210 containerd[1729]: 2025-08-13 00:23:22.885 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72bed6b-002b-4a44-960e-b6cc7d136310", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"coredns-668d6bf9bc-l25zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia332504d368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.922210 containerd[1729]: 2025-08-13 00:23:22.886 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.195/32] ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.922210 containerd[1729]: 2025-08-13 00:23:22.886 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia332504d368 ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.922210 containerd[1729]: 2025-08-13 00:23:22.897 [INFO][4906] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.922210 containerd[1729]: 2025-08-13 00:23:22.897 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72bed6b-002b-4a44-960e-b6cc7d136310", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff", Pod:"coredns-668d6bf9bc-l25zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia332504d368", 
MAC:"aa:39:76:74:cc:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:22.922388 containerd[1729]: 2025-08-13 00:23:22.914 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff" Namespace="kube-system" Pod="coredns-668d6bf9bc-l25zw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:22.954601 containerd[1729]: time="2025-08-13T00:23:22.954460121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:22.954601 containerd[1729]: time="2025-08-13T00:23:22.954538521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:22.954601 containerd[1729]: time="2025-08-13T00:23:22.954593721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.954853 containerd[1729]: time="2025-08-13T00:23:22.954759562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:22.985388 systemd[1]: Started cri-containerd-ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff.scope - libcontainer container ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff. 
Aug 13 00:23:23.036328 containerd[1729]: time="2025-08-13T00:23:23.036180909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l25zw,Uid:a72bed6b-002b-4a44-960e-b6cc7d136310,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff\"" Aug 13 00:23:23.042990 containerd[1729]: time="2025-08-13T00:23:23.042795647Z" level=info msg="CreateContainer within sandbox \"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:23.126333 containerd[1729]: time="2025-08-13T00:23:23.126246920Z" level=info msg="CreateContainer within sandbox \"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f3079d441186ac8e9e7a27293870b5f26d2792d446f3ecde27d8aba7dceb7dc\"" Aug 13 00:23:23.128207 containerd[1729]: time="2025-08-13T00:23:23.127250762Z" level=info msg="StartContainer for \"8f3079d441186ac8e9e7a27293870b5f26d2792d446f3ecde27d8aba7dceb7dc\"" Aug 13 00:23:23.166350 systemd[1]: Started cri-containerd-8f3079d441186ac8e9e7a27293870b5f26d2792d446f3ecde27d8aba7dceb7dc.scope - libcontainer container 8f3079d441186ac8e9e7a27293870b5f26d2792d446f3ecde27d8aba7dceb7dc. 
Aug 13 00:23:23.200300 containerd[1729]: time="2025-08-13T00:23:23.199993805Z" level=info msg="StartContainer for \"8f3079d441186ac8e9e7a27293870b5f26d2792d446f3ecde27d8aba7dceb7dc\" returns successfully" Aug 13 00:23:23.472804 containerd[1729]: time="2025-08-13T00:23:23.472519444Z" level=info msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\"" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" iface="eth0" netns="/var/run/netns/cni-9984e4ca-afc1-8139-4f49-45cd0469675d" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" iface="eth0" netns="/var/run/netns/cni-9984e4ca-afc1-8139-4f49-45cd0469675d" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" iface="eth0" netns="/var/run/netns/cni-9984e4ca-afc1-8139-4f49-45cd0469675d" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.530 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.558 [INFO][5049] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.558 [INFO][5049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.558 [INFO][5049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.566 [WARNING][5049] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.567 [INFO][5049] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.570 [INFO][5049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:23.574380 containerd[1729]: 2025-08-13 00:23:23.572 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Aug 13 00:23:23.575303 containerd[1729]: time="2025-08-13T00:23:23.575152050Z" level=info msg="TearDown network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" successfully" Aug 13 00:23:23.575303 containerd[1729]: time="2025-08-13T00:23:23.575185851Z" level=info msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" returns successfully" Aug 13 00:23:23.576196 containerd[1729]: time="2025-08-13T00:23:23.576071493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756654fbd8-fsrdw,Uid:7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:23.660280 systemd-networkd[1579]: cali336563c61e6: Gained IPv6LL Aug 13 00:23:23.720834 systemd-networkd[1579]: cali8b53941e37a: Link UP Aug 13 00:23:23.721214 systemd-networkd[1579]: cali8b53941e37a: Gained carrier Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.627 [INFO][5056] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.643 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0 calico-kube-controllers-756654fbd8- calico-system 7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3 922 0 2025-08-13 00:23:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:756654fbd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 calico-kube-controllers-756654fbd8-fsrdw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8b53941e37a [] [] }} ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.643 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.674 [INFO][5069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" HandleID="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.756529 containerd[1729]: 
2025-08-13 00:23:23.675 [INFO][5069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" HandleID="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"calico-kube-controllers-756654fbd8-fsrdw", "timestamp":"2025-08-13 00:23:23.674886728 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.675 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.675 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.675 [INFO][5069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.685 [INFO][5069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.690 [INFO][5069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.695 [INFO][5069] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.697 [INFO][5069] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.699 [INFO][5069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.699 [INFO][5069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.700 [INFO][5069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88 Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.708 [INFO][5069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.715 [INFO][5069] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.196/26] block=192.168.105.192/26 handle="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.715 [INFO][5069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.196/26] handle="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.715 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:23.756529 containerd[1729]: 2025-08-13 00:23:23.715 [INFO][5069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.196/26] IPv6=[] ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" HandleID="k8s-pod-network.c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.718 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0", GenerateName:"calico-kube-controllers-756654fbd8-", Namespace:"calico-system", SelfLink:"", UID:"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756654fbd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"calico-kube-controllers-756654fbd8-fsrdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b53941e37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.718 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.196/32] ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.718 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b53941e37a ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.722 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.722 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0", GenerateName:"calico-kube-controllers-756654fbd8-", Namespace:"calico-system", SelfLink:"", UID:"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756654fbd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88", Pod:"calico-kube-controllers-756654fbd8-fsrdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b53941e37a", MAC:"56:75:06:5a:bb:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:23.757343 containerd[1729]: 2025-08-13 00:23:23.750 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88" Namespace="calico-system" Pod="calico-kube-controllers-756654fbd8-fsrdw" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0" Aug 13 00:23:23.785932 kubelet[3171]: I0813 00:23:23.783970 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l25zw" podStartSLOduration=39.783950272 podStartE2EDuration="39.783950272s" podCreationTimestamp="2025-08-13 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:23.782805869 +0000 UTC m=+44.467934629" watchObservedRunningTime="2025-08-13 00:23:23.783950272 +0000 UTC m=+44.469079032" Aug 13 00:23:23.794704 containerd[1729]: time="2025-08-13T00:23:23.794019980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:23.794704 containerd[1729]: time="2025-08-13T00:23:23.794078181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:23.794704 containerd[1729]: time="2025-08-13T00:23:23.794089421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:23.794704 containerd[1729]: time="2025-08-13T00:23:23.794197221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:23.829344 systemd[1]: Started cri-containerd-c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88.scope - libcontainer container c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88. Aug 13 00:23:23.841408 systemd[1]: run-netns-cni\x2d9984e4ca\x2dafc1\x2d8139\x2d4f49\x2d45cd0469675d.mount: Deactivated successfully. Aug 13 00:23:23.880678 containerd[1729]: time="2025-08-13T00:23:23.880634062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756654fbd8-fsrdw,Uid:7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88\"" Aug 13 00:23:23.883875 containerd[1729]: time="2025-08-13T00:23:23.883783151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:23:24.234299 systemd-networkd[1579]: calia332504d368: Gained IPv6LL Aug 13 00:23:24.471983 containerd[1729]: time="2025-08-13T00:23:24.470759706Z" level=info msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.526 [INFO][5150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.527 [INFO][5150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" iface="eth0" netns="/var/run/netns/cni-625e0a14-a3f0-3b0f-863b-8839946129d3" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.527 [INFO][5150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" iface="eth0" netns="/var/run/netns/cni-625e0a14-a3f0-3b0f-863b-8839946129d3" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.527 [INFO][5150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" iface="eth0" netns="/var/run/netns/cni-625e0a14-a3f0-3b0f-863b-8839946129d3" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.527 [INFO][5150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.527 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.553 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.553 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.553 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.562 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.562 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.564 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:24.568126 containerd[1729]: 2025-08-13 00:23:24.566 [INFO][5150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:24.568705 containerd[1729]: time="2025-08-13T00:23:24.568431298Z" level=info msg="TearDown network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" successfully" Aug 13 00:23:24.568705 containerd[1729]: time="2025-08-13T00:23:24.568469298Z" level=info msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" returns successfully" Aug 13 00:23:24.573726 systemd[1]: run-netns-cni\x2d625e0a14\x2da3f0\x2d3b0f\x2d863b\x2d8839946129d3.mount: Deactivated successfully. 
Aug 13 00:23:24.574910 containerd[1729]: time="2025-08-13T00:23:24.574224315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbgr7,Uid:7bc7ffc9-ee10-44a4-88ba-09de883ee749,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:24.726074 systemd-networkd[1579]: cali67636b3a5a5: Link UP Aug 13 00:23:24.726301 systemd-networkd[1579]: cali67636b3a5a5: Gained carrier Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.630 [INFO][5169] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.651 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0 csi-node-driver- calico-system 7bc7ffc9-ee10-44a4-88ba-09de883ee749 938 0 2025-08-13 00:23:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 csi-node-driver-wbgr7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali67636b3a5a5 [] [] }} ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.651 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.676 [INFO][5177] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" HandleID="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.676 [INFO][5177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" HandleID="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"csi-node-driver-wbgr7", "timestamp":"2025-08-13 00:23:24.676076158 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.676 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.676 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.676 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.685 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.692 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.696 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.698 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.701 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.701 [INFO][5177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.702 [INFO][5177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116 Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.707 [INFO][5177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.718 [INFO][5177] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.197/26] block=192.168.105.192/26 handle="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.718 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.197/26] handle="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.718 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:24.747638 containerd[1729]: 2025-08-13 00:23:24.718 [INFO][5177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.197/26] IPv6=[] ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" HandleID="k8s-pod-network.cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.748827 containerd[1729]: 2025-08-13 00:23:24.720 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc7ffc9-ee10-44a4-88ba-09de883ee749", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"csi-node-driver-wbgr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67636b3a5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:24.748827 containerd[1729]: 2025-08-13 00:23:24.720 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.197/32] ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.748827 containerd[1729]: 2025-08-13 00:23:24.720 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67636b3a5a5 ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.748827 containerd[1729]: 2025-08-13 00:23:24.727 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.748827 
containerd[1729]: 2025-08-13 00:23:24.728 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc7ffc9-ee10-44a4-88ba-09de883ee749", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116", Pod:"csi-node-driver-wbgr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67636b3a5a5", MAC:"ce:42:e7:f6:4f:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:24.748827 containerd[1729]: 
2025-08-13 00:23:24.743 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116" Namespace="calico-system" Pod="csi-node-driver-wbgr7" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:24.780046 containerd[1729]: time="2025-08-13T00:23:24.779894928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:24.780046 containerd[1729]: time="2025-08-13T00:23:24.779986128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:24.780046 containerd[1729]: time="2025-08-13T00:23:24.780020008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:24.780643 containerd[1729]: time="2025-08-13T00:23:24.780554129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:24.806391 systemd[1]: Started cri-containerd-cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116.scope - libcontainer container cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116. 
Aug 13 00:23:24.841433 containerd[1729]: time="2025-08-13T00:23:24.840891538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbgr7,Uid:7bc7ffc9-ee10-44a4-88ba-09de883ee749,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116\"" Aug 13 00:23:25.148861 kubelet[3171]: I0813 00:23:25.148479 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:23:25.472534 containerd[1729]: time="2025-08-13T00:23:25.472206657Z" level=info msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" Aug 13 00:23:25.476279 containerd[1729]: time="2025-08-13T00:23:25.475952987Z" level=info msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\"" Aug 13 00:23:25.476729 containerd[1729]: time="2025-08-13T00:23:25.476599269Z" level=info msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\"" Aug 13 00:23:25.513382 systemd-networkd[1579]: cali8b53941e37a: Gained IPv6LL Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.626 [INFO][5305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.629 [INFO][5305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" iface="eth0" netns="/var/run/netns/cni-1e406fdb-52ae-db90-6d40-2e0999993c96" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.632 [INFO][5305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" iface="eth0" netns="/var/run/netns/cni-1e406fdb-52ae-db90-6d40-2e0999993c96" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.632 [INFO][5305] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" iface="eth0" netns="/var/run/netns/cni-1e406fdb-52ae-db90-6d40-2e0999993c96" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.632 [INFO][5305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.633 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.671 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.671 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.671 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.691 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.691 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.694 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:25.705082 containerd[1729]: 2025-08-13 00:23:25.699 [INFO][5305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:25.708211 containerd[1729]: time="2025-08-13T00:23:25.706193189Z" level=info msg="TearDown network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" successfully" Aug 13 00:23:25.708211 containerd[1729]: time="2025-08-13T00:23:25.706247069Z" level=info msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" returns successfully" Aug 13 00:23:25.709046 containerd[1729]: time="2025-08-13T00:23:25.708559115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-sgp55,Uid:d0c10593-4a39-4156-937c-70315299f09c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:23:25.709968 systemd[1]: run-netns-cni\x2d1e406fdb\x2d52ae\x2ddb90\x2d6d40\x2d2e0999993c96.mount: Deactivated successfully. 
Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.668 [INFO][5296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.670 [INFO][5296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" iface="eth0" netns="/var/run/netns/cni-ae40be8b-cc9d-4f3d-d123-5a28b070ebfb" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.671 [INFO][5296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" iface="eth0" netns="/var/run/netns/cni-ae40be8b-cc9d-4f3d-d123-5a28b070ebfb" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.671 [INFO][5296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" iface="eth0" netns="/var/run/netns/cni-ae40be8b-cc9d-4f3d-d123-5a28b070ebfb" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.672 [INFO][5296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.672 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.732 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.732 
[INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.732 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.754 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.756 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.767 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:25.785833 containerd[1729]: 2025-08-13 00:23:25.779 [INFO][5296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:25.787653 containerd[1729]: time="2025-08-13T00:23:25.787613136Z" level=info msg="TearDown network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" successfully" Aug 13 00:23:25.788507 containerd[1729]: time="2025-08-13T00:23:25.788096057Z" level=info msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" returns successfully" Aug 13 00:23:25.791551 systemd[1]: run-netns-cni\x2dae40be8b\x2dcc9d\x2d4f3d\x2dd123\x2d5a28b070ebfb.mount: Deactivated successfully. 
Aug 13 00:23:25.794195 containerd[1729]: time="2025-08-13T00:23:25.792451189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-tgxh9,Uid:92ee9526-40a5-4de2-be91-45eaf7973f17,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.677 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.677 [INFO][5295] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" iface="eth0" netns="/var/run/netns/cni-cdfe4c24-099e-42ad-1bf6-caaa8854adc4" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.678 [INFO][5295] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" iface="eth0" netns="/var/run/netns/cni-cdfe4c24-099e-42ad-1bf6-caaa8854adc4" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.681 [INFO][5295] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" iface="eth0" netns="/var/run/netns/cni-cdfe4c24-099e-42ad-1bf6-caaa8854adc4" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.681 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.681 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.760 [INFO][5331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.760 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.768 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.784 [WARNING][5331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.784 [INFO][5331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.787 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:25.797792 containerd[1729]: 2025-08-13 00:23:25.794 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:25.800310 containerd[1729]: time="2025-08-13T00:23:25.798304606Z" level=info msg="TearDown network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" successfully" Aug 13 00:23:25.800310 containerd[1729]: time="2025-08-13T00:23:25.798335206Z" level=info msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" returns successfully" Aug 13 00:23:25.800310 containerd[1729]: time="2025-08-13T00:23:25.799851690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l455f,Uid:8bf5886c-e31f-4c51-ba8d-b8b7e72967c4,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:25.802554 systemd[1]: run-netns-cni\x2dcdfe4c24\x2d099e\x2d42ad\x2d1bf6\x2dcaaa8854adc4.mount: Deactivated successfully. 
Aug 13 00:23:26.188369 systemd-networkd[1579]: cali2f2f714cd55: Link UP Aug 13 00:23:26.192713 systemd-networkd[1579]: cali2f2f714cd55: Gained carrier Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:25.928 [INFO][5341] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:25.976 [INFO][5341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0 calico-apiserver-56fd6c9f9d- calico-apiserver d0c10593-4a39-4156-937c-70315299f09c 953 0 2025-08-13 00:22:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56fd6c9f9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 calico-apiserver-56fd6c9f9d-sgp55 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f2f714cd55 [] [] }} ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:25.976 [INFO][5341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.087 [INFO][5379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" 
HandleID="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.087 [INFO][5379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" HandleID="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"calico-apiserver-56fd6c9f9d-sgp55", "timestamp":"2025-08-13 00:23:26.087616212 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.087 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.087 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.087 [INFO][5379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.111 [INFO][5379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.120 [INFO][5379] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.139 [INFO][5379] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.144 [INFO][5379] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.148 [INFO][5379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.148 [INFO][5379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.151 [INFO][5379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.161 [INFO][5379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.176 [INFO][5379] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.198/26] block=192.168.105.192/26 handle="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.177 [INFO][5379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.198/26] handle="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.177 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:26.234382 containerd[1729]: 2025-08-13 00:23:26.177 [INFO][5379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.198/26] IPv6=[] ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" HandleID="k8s-pod-network.cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.182 [INFO][5341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c10593-4a39-4156-937c-70315299f09c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"calico-apiserver-56fd6c9f9d-sgp55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f2f714cd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.182 [INFO][5341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.198/32] ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.182 [INFO][5341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f2f714cd55 ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.193 [INFO][5341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" 
Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.198 [INFO][5341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c10593-4a39-4156-937c-70315299f09c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d", Pod:"calico-apiserver-56fd6c9f9d-sgp55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali2f2f714cd55", MAC:"aa:8e:a2:8b:e6:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.237064 containerd[1729]: 2025-08-13 00:23:26.226 [INFO][5341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-sgp55" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:26.299100 containerd[1729]: time="2025-08-13T00:23:26.297258956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:26.299100 containerd[1729]: time="2025-08-13T00:23:26.297404756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:26.299100 containerd[1729]: time="2025-08-13T00:23:26.297422796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.299100 containerd[1729]: time="2025-08-13T00:23:26.297586157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.301427 systemd-networkd[1579]: cali61cb0939c11: Link UP Aug 13 00:23:26.307621 systemd-networkd[1579]: cali61cb0939c11: Gained carrier Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:25.957 [INFO][5361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:25.995 [INFO][5361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0 goldmane-768f4c5c69- calico-system 8bf5886c-e31f-4c51-ba8d-b8b7e72967c4 955 0 2025-08-13 00:23:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 goldmane-768f4c5c69-l455f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali61cb0939c11 [] [] }} ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:25.996 [INFO][5361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.111 [INFO][5386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" HandleID="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" 
Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.115 [INFO][5386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" HandleID="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"goldmane-768f4c5c69-l455f", "timestamp":"2025-08-13 00:23:26.111069637 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.115 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.177 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.178 [INFO][5386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.211 [INFO][5386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.228 [INFO][5386] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.246 [INFO][5386] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.250 [INFO][5386] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.255 [INFO][5386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.255 [INFO][5386] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.259 [INFO][5386] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917 Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.270 [INFO][5386] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5386] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.199/26] block=192.168.105.192/26 handle="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.199/26] handle="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:26.355323 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.199/26] IPv6=[] ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" HandleID="k8s-pod-network.623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.289 [INFO][5361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"goldmane-768f4c5c69-l455f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61cb0939c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.291 [INFO][5361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.199/32] ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.291 [INFO][5361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61cb0939c11 ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.319 [INFO][5361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.319 [INFO][5361] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917", Pod:"goldmane-768f4c5c69-l455f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61cb0939c11", MAC:"de:9c:c3:d3:17:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.356471 containerd[1729]: 2025-08-13 00:23:26.345 [INFO][5361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917" Namespace="calico-system" Pod="goldmane-768f4c5c69-l455f" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:26.371386 systemd[1]: Started cri-containerd-cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d.scope - libcontainer container cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d. Aug 13 00:23:26.406787 containerd[1729]: time="2025-08-13T00:23:26.405968099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:26.407688 containerd[1729]: time="2025-08-13T00:23:26.407623944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:26.407688 containerd[1729]: time="2025-08-13T00:23:26.407667624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.408344 containerd[1729]: time="2025-08-13T00:23:26.408266545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.431203 systemd-networkd[1579]: cali4f582317deb: Link UP Aug 13 00:23:26.433285 systemd-networkd[1579]: cali4f582317deb: Gained carrier Aug 13 00:23:26.471389 systemd[1]: Started cri-containerd-623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917.scope - libcontainer container 623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917. 
Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:25.994 [INFO][5352] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.033 [INFO][5352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0 calico-apiserver-56fd6c9f9d- calico-apiserver 92ee9526-40a5-4de2-be91-45eaf7973f17 954 0 2025-08-13 00:22:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56fd6c9f9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-2fbd311b45 calico-apiserver-56fd6c9f9d-tgxh9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f582317deb [] [] }} ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.033 [INFO][5352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.152 [INFO][5391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" HandleID="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.476151 
containerd[1729]: 2025-08-13 00:23:26.153 [INFO][5391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" HandleID="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000325e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-2fbd311b45", "pod":"calico-apiserver-56fd6c9f9d-tgxh9", "timestamp":"2025-08-13 00:23:26.152213112 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2fbd311b45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.153 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.283 [INFO][5391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2fbd311b45' Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.328 [INFO][5391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.350 [INFO][5391] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.360 [INFO][5391] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.370 [INFO][5391] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.378 [INFO][5391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.379 [INFO][5391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.387 [INFO][5391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.398 [INFO][5391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.411 [INFO][5391] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.105.200/26] block=192.168.105.192/26 handle="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.413 [INFO][5391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.200/26] handle="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" host="ci-4081.3.5-a-2fbd311b45" Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.413 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:26.476151 containerd[1729]: 2025-08-13 00:23:26.413 [INFO][5391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.200/26] IPv6=[] ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" HandleID="k8s-pod-network.a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.423 [INFO][5352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"92ee9526-40a5-4de2-be91-45eaf7973f17", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"", Pod:"calico-apiserver-56fd6c9f9d-tgxh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f582317deb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.425 [INFO][5352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.200/32] ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.425 [INFO][5352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f582317deb ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.432 [INFO][5352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" 
Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.436 [INFO][5352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"92ee9526-40a5-4de2-be91-45eaf7973f17", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f", Pod:"calico-apiserver-56fd6c9f9d-tgxh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali4f582317deb", MAC:"d2:e9:6c:a5:da:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:26.478067 containerd[1729]: 2025-08-13 00:23:26.466 [INFO][5352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f" Namespace="calico-apiserver" Pod="calico-apiserver-56fd6c9f9d-tgxh9" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:26.534759 containerd[1729]: time="2025-08-13T00:23:26.533255894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:26.535323 containerd[1729]: time="2025-08-13T00:23:26.534699378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:26.535323 containerd[1729]: time="2025-08-13T00:23:26.534728458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:26.536119 containerd[1729]: time="2025-08-13T00:23:26.535948981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:23:26.551246 containerd[1729]: time="2025-08-13T00:23:26.551107983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-sgp55,Uid:d0c10593-4a39-4156-937c-70315299f09c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d\""
Aug 13 00:23:26.567652 kernel: bpftool[5528]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Aug 13 00:23:26.582495 systemd[1]: Started cri-containerd-a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f.scope - libcontainer container a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f.
Aug 13 00:23:26.629283 containerd[1729]: time="2025-08-13T00:23:26.629128161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l455f,Uid:8bf5886c-e31f-4c51-ba8d-b8b7e72967c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917\""
Aug 13 00:23:26.665329 systemd-networkd[1579]: cali67636b3a5a5: Gained IPv6LL
Aug 13 00:23:26.678529 containerd[1729]: time="2025-08-13T00:23:26.678457938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56fd6c9f9d-tgxh9,Uid:92ee9526-40a5-4de2-be91-45eaf7973f17,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f\""
Aug 13 00:23:26.991168 containerd[1729]: time="2025-08-13T00:23:26.990952969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:26.993755 containerd[1729]: time="2025-08-13T00:23:26.993698337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Aug 13 00:23:26.995925 containerd[1729]: time="2025-08-13T00:23:26.995865423Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:26.999870 containerd[1729]: time="2025-08-13T00:23:26.999798874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:27.001114 containerd[1729]: time="2025-08-13T00:23:27.000515956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.116660085s"
Aug 13 00:23:27.001114 containerd[1729]: time="2025-08-13T00:23:27.000557916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Aug 13 00:23:27.003345 containerd[1729]: time="2025-08-13T00:23:27.003110283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Aug 13 00:23:27.023259 containerd[1729]: time="2025-08-13T00:23:27.023208379Z" level=info msg="CreateContainer within sandbox \"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 13 00:23:27.212995 containerd[1729]: time="2025-08-13T00:23:27.212268426Z" level=info msg="CreateContainer within sandbox \"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154\""
Aug 13 00:23:27.214592 containerd[1729]: time="2025-08-13T00:23:27.213925710Z" level=info msg="StartContainer for \"d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154\""
Aug 13 00:23:27.247530 systemd[1]: Started cri-containerd-d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154.scope - libcontainer container d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154.
Aug 13 00:23:27.250015 systemd-networkd[1579]: vxlan.calico: Link UP
Aug 13 00:23:27.250031 systemd-networkd[1579]: vxlan.calico: Gained carrier
Aug 13 00:23:27.326723 containerd[1729]: time="2025-08-13T00:23:27.326645224Z" level=info msg="StartContainer for \"d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154\" returns successfully"
Aug 13 00:23:27.497276 systemd-networkd[1579]: cali2f2f714cd55: Gained IPv6LL
Aug 13 00:23:27.881351 systemd-networkd[1579]: cali4f582317deb: Gained IPv6LL
Aug 13 00:23:27.886266 kubelet[3171]: I0813 00:23:27.885858 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-756654fbd8-fsrdw" podStartSLOduration=24.766478456 podStartE2EDuration="27.885838229s" podCreationTimestamp="2025-08-13 00:23:00 +0000 UTC" firstStartedPulling="2025-08-13 00:23:23.882906428 +0000 UTC m=+44.568035188" lastFinishedPulling="2025-08-13 00:23:27.002266241 +0000 UTC m=+47.687394961" observedRunningTime="2025-08-13 00:23:27.826067819 +0000 UTC m=+48.511196659" watchObservedRunningTime="2025-08-13 00:23:27.885838229 +0000 UTC m=+48.570966989"
Aug 13 00:23:28.182160 containerd[1729]: time="2025-08-13T00:23:28.181998032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:28.184199 containerd[1729]: time="2025-08-13T00:23:28.184155798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Aug 13 00:23:28.186986 containerd[1729]: time="2025-08-13T00:23:28.186831285Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:28.191196 containerd[1729]: time="2025-08-13T00:23:28.191127018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:28.191980 containerd[1729]: time="2025-08-13T00:23:28.191723859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.188557696s"
Aug 13 00:23:28.191980 containerd[1729]: time="2025-08-13T00:23:28.191757699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Aug 13 00:23:28.194810 containerd[1729]: time="2025-08-13T00:23:28.194771268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 13 00:23:28.196238 containerd[1729]: time="2025-08-13T00:23:28.195778191Z" level=info msg="CreateContainer within sandbox \"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 13 00:23:28.221937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604610915.mount: Deactivated successfully.
Aug 13 00:23:28.231374 containerd[1729]: time="2025-08-13T00:23:28.231303652Z" level=info msg="CreateContainer within sandbox \"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"53bffd8f07c030e7038eef09c760f5c04ef057ea9a0e8334a2de25f1ee1cba55\""
Aug 13 00:23:28.232722 containerd[1729]: time="2025-08-13T00:23:28.232037414Z" level=info msg="StartContainer for \"53bffd8f07c030e7038eef09c760f5c04ef057ea9a0e8334a2de25f1ee1cba55\""
Aug 13 00:23:28.267363 systemd[1]: Started cri-containerd-53bffd8f07c030e7038eef09c760f5c04ef057ea9a0e8334a2de25f1ee1cba55.scope - libcontainer container 53bffd8f07c030e7038eef09c760f5c04ef057ea9a0e8334a2de25f1ee1cba55.
Aug 13 00:23:28.297121 containerd[1729]: time="2025-08-13T00:23:28.297046799Z" level=info msg="StartContainer for \"53bffd8f07c030e7038eef09c760f5c04ef057ea9a0e8334a2de25f1ee1cba55\" returns successfully"
Aug 13 00:23:28.330346 systemd-networkd[1579]: cali61cb0939c11: Gained IPv6LL
Aug 13 00:23:29.161307 systemd-networkd[1579]: vxlan.calico: Gained IPv6LL
Aug 13 00:23:30.889461 containerd[1729]: time="2025-08-13T00:23:30.889406212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.891745 containerd[1729]: time="2025-08-13T00:23:30.891585499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149"
Aug 13 00:23:30.894263 containerd[1729]: time="2025-08-13T00:23:30.894202066Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.899419 containerd[1729]: time="2025-08-13T00:23:30.899262120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.704450052s"
Aug 13 00:23:30.899419 containerd[1729]: time="2025-08-13T00:23:30.899315601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Aug 13 00:23:30.902248 containerd[1729]: time="2025-08-13T00:23:30.902008888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 13 00:23:30.904966 containerd[1729]: time="2025-08-13T00:23:30.904650536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.906563 containerd[1729]: time="2025-08-13T00:23:30.906510461Z" level=info msg="CreateContainer within sandbox \"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 00:23:30.934343 containerd[1729]: time="2025-08-13T00:23:30.934206780Z" level=info msg="CreateContainer within sandbox \"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5fefb3a5db8c2cde84c78e9126ee155c96d03fe5355336e75fe3be8cc509d64c\""
Aug 13 00:23:30.936117 containerd[1729]: time="2025-08-13T00:23:30.936073025Z" level=info msg="StartContainer for \"5fefb3a5db8c2cde84c78e9126ee155c96d03fe5355336e75fe3be8cc509d64c\""
Aug 13 00:23:31.005372 systemd[1]: Started cri-containerd-5fefb3a5db8c2cde84c78e9126ee155c96d03fe5355336e75fe3be8cc509d64c.scope - libcontainer container 5fefb3a5db8c2cde84c78e9126ee155c96d03fe5355336e75fe3be8cc509d64c.
Aug 13 00:23:31.044928 containerd[1729]: time="2025-08-13T00:23:31.044856975Z" level=info msg="StartContainer for \"5fefb3a5db8c2cde84c78e9126ee155c96d03fe5355336e75fe3be8cc509d64c\" returns successfully"
Aug 13 00:23:31.828429 kubelet[3171]: I0813 00:23:31.828343 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-sgp55" podStartSLOduration=32.494528146 podStartE2EDuration="36.828325003s" podCreationTimestamp="2025-08-13 00:22:55 +0000 UTC" firstStartedPulling="2025-08-13 00:23:26.566501586 +0000 UTC m=+47.251630346" lastFinishedPulling="2025-08-13 00:23:30.900298443 +0000 UTC m=+51.585427203" observedRunningTime="2025-08-13 00:23:31.827614641 +0000 UTC m=+52.512743401" watchObservedRunningTime="2025-08-13 00:23:31.828325003 +0000 UTC m=+52.513453763"
Aug 13 00:23:32.814249 kubelet[3171]: I0813 00:23:32.814211 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:23:34.074960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43102585.mount: Deactivated successfully.
Aug 13 00:23:34.589771 containerd[1729]: time="2025-08-13T00:23:34.588649934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:34.590997 containerd[1729]: time="2025-08-13T00:23:34.590949341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Aug 13 00:23:34.594671 containerd[1729]: time="2025-08-13T00:23:34.594604831Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:34.602377 containerd[1729]: time="2025-08-13T00:23:34.601871492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:34.603026 containerd[1729]: time="2025-08-13T00:23:34.602876615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.700815527s"
Aug 13 00:23:34.603026 containerd[1729]: time="2025-08-13T00:23:34.602920535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Aug 13 00:23:34.606248 containerd[1729]: time="2025-08-13T00:23:34.606212744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 13 00:23:34.609060 containerd[1729]: time="2025-08-13T00:23:34.609011152Z" level=info msg="CreateContainer within sandbox \"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 13 00:23:34.647731 containerd[1729]: time="2025-08-13T00:23:34.647677102Z" level=info msg="CreateContainer within sandbox \"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d\""
Aug 13 00:23:34.649102 containerd[1729]: time="2025-08-13T00:23:34.648856226Z" level=info msg="StartContainer for \"9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d\""
Aug 13 00:23:34.697276 systemd[1]: Started cri-containerd-9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d.scope - libcontainer container 9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d.
Aug 13 00:23:34.764697 containerd[1729]: time="2025-08-13T00:23:34.764539115Z" level=info msg="StartContainer for \"9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d\" returns successfully"
Aug 13 00:23:34.912645 containerd[1729]: time="2025-08-13T00:23:34.912509655Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:34.915533 containerd[1729]: time="2025-08-13T00:23:34.914990463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Aug 13 00:23:34.917323 containerd[1729]: time="2025-08-13T00:23:34.917288389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 310.898964ms"
Aug 13 00:23:34.917450 containerd[1729]: time="2025-08-13T00:23:34.917435269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Aug 13 00:23:34.919491 containerd[1729]: time="2025-08-13T00:23:34.919443075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 13 00:23:34.920969 containerd[1729]: time="2025-08-13T00:23:34.920943439Z" level=info msg="CreateContainer within sandbox \"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 00:23:34.948988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822099113.mount: Deactivated successfully.
Aug 13 00:23:34.955690 containerd[1729]: time="2025-08-13T00:23:34.953184571Z" level=info msg="CreateContainer within sandbox \"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c826ffc78b30ebf37eb862351c41f2535de269f592714c2ae3d01ca481948b27\""
Aug 13 00:23:34.955690 containerd[1729]: time="2025-08-13T00:23:34.954047134Z" level=info msg="StartContainer for \"c826ffc78b30ebf37eb862351c41f2535de269f592714c2ae3d01ca481948b27\""
Aug 13 00:23:34.990369 systemd[1]: Started cri-containerd-c826ffc78b30ebf37eb862351c41f2535de269f592714c2ae3d01ca481948b27.scope - libcontainer container c826ffc78b30ebf37eb862351c41f2535de269f592714c2ae3d01ca481948b27.
Aug 13 00:23:35.030025 containerd[1729]: time="2025-08-13T00:23:35.029958390Z" level=info msg="StartContainer for \"c826ffc78b30ebf37eb862351c41f2535de269f592714c2ae3d01ca481948b27\" returns successfully"
Aug 13 00:23:35.870523 kubelet[3171]: I0813 00:23:35.869574 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-l455f" podStartSLOduration=27.896701122 podStartE2EDuration="35.869552774s" podCreationTimestamp="2025-08-13 00:23:00 +0000 UTC" firstStartedPulling="2025-08-13 00:23:26.63246069 +0000 UTC m=+47.317589450" lastFinishedPulling="2025-08-13 00:23:34.605312342 +0000 UTC m=+55.290441102" observedRunningTime="2025-08-13 00:23:34.849272916 +0000 UTC m=+55.534401676" watchObservedRunningTime="2025-08-13 00:23:35.869552774 +0000 UTC m=+56.554681534"
Aug 13 00:23:36.507224 containerd[1729]: time="2025-08-13T00:23:36.506953082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:36.509058 containerd[1729]: time="2025-08-13T00:23:36.509016048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 13 00:23:36.512163 containerd[1729]: time="2025-08-13T00:23:36.512061217Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:36.516046 containerd[1729]: time="2025-08-13T00:23:36.515999428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:36.517287 containerd[1729]: time="2025-08-13T00:23:36.516747790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.596510953s"
Aug 13 00:23:36.517287 containerd[1729]: time="2025-08-13T00:23:36.516791190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 13 00:23:36.520494 containerd[1729]: time="2025-08-13T00:23:36.520451920Z" level=info msg="CreateContainer within sandbox \"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:23:36.561313 containerd[1729]: time="2025-08-13T00:23:36.561220794Z" level=info msg="CreateContainer within sandbox \"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a6e8750716b19726d8c1e7734e931eadb7e3a2a298a001f4d0452857775d8bc6\""
Aug 13 00:23:36.564163 containerd[1729]: time="2025-08-13T00:23:36.562385518Z" level=info msg="StartContainer for \"a6e8750716b19726d8c1e7734e931eadb7e3a2a298a001f4d0452857775d8bc6\""
Aug 13 00:23:36.602365 systemd[1]: Started cri-containerd-a6e8750716b19726d8c1e7734e931eadb7e3a2a298a001f4d0452857775d8bc6.scope - libcontainer container a6e8750716b19726d8c1e7734e931eadb7e3a2a298a001f4d0452857775d8bc6.
Aug 13 00:23:36.738296 containerd[1729]: time="2025-08-13T00:23:36.738208331Z" level=info msg="StartContainer for \"a6e8750716b19726d8c1e7734e931eadb7e3a2a298a001f4d0452857775d8bc6\" returns successfully"
Aug 13 00:23:36.840380 kubelet[3171]: I0813 00:23:36.840239 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:23:36.859381 kubelet[3171]: I0813 00:23:36.859148 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wbgr7" podStartSLOduration=25.184370421 podStartE2EDuration="36.85911867s" podCreationTimestamp="2025-08-13 00:23:00 +0000 UTC" firstStartedPulling="2025-08-13 00:23:24.842744143 +0000 UTC m=+45.527872903" lastFinishedPulling="2025-08-13 00:23:36.517492392 +0000 UTC m=+57.202621152" observedRunningTime="2025-08-13 00:23:36.858731989 +0000 UTC m=+57.543860749" watchObservedRunningTime="2025-08-13 00:23:36.85911867 +0000 UTC m=+57.544247430"
Aug 13 00:23:36.859866 kubelet[3171]: I0813 00:23:36.859567 3171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56fd6c9f9d-tgxh9" podStartSLOduration=33.623580828 podStartE2EDuration="41.859558031s" podCreationTimestamp="2025-08-13 00:22:55 +0000 UTC" firstStartedPulling="2025-08-13 00:23:26.682349549 +0000 UTC m=+47.367478309" lastFinishedPulling="2025-08-13 00:23:34.918326752 +0000 UTC m=+55.603455512" observedRunningTime="2025-08-13 00:23:35.871802021 +0000 UTC m=+56.556930941" watchObservedRunningTime="2025-08-13 00:23:36.859558031 +0000 UTC m=+57.544686791"
Aug 13 00:23:36.878275 systemd[1]: run-containerd-runc-k8s.io-9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d-runc.CDmpiK.mount: Deactivated successfully.
Aug 13 00:23:37.594603 kubelet[3171]: I0813 00:23:37.594503 3171 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:23:37.598043 kubelet[3171]: I0813 00:23:37.598012 3171 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:23:39.502015 containerd[1729]: time="2025-08-13T00:23:39.501971443Z" level=info msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\""
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.540 [WARNING][5992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0", GenerateName:"calico-kube-controllers-756654fbd8-", Namespace:"calico-system", SelfLink:"", UID:"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756654fbd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88", Pod:"calico-kube-controllers-756654fbd8-fsrdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b53941e37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.541 [INFO][5992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.541 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" iface="eth0" netns=""
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.541 [INFO][5992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.541 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.577 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.577 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.577 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.586 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.586 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.589 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:39.593455 containerd[1729]: 2025-08-13 00:23:39.591 [INFO][5992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.594196 containerd[1729]: time="2025-08-13T00:23:39.593504500Z" level=info msg="TearDown network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" successfully"
Aug 13 00:23:39.594196 containerd[1729]: time="2025-08-13T00:23:39.593531380Z" level=info msg="StopPodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" returns successfully"
Aug 13 00:23:39.594702 containerd[1729]: time="2025-08-13T00:23:39.594483823Z" level=info msg="RemovePodSandbox for \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\""
Aug 13 00:23:39.594702 containerd[1729]: time="2025-08-13T00:23:39.594521903Z" level=info msg="Forcibly stopping sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\""
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.631 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0", GenerateName:"calico-kube-controllers-756654fbd8-", Namespace:"calico-system", SelfLink:"", UID:"7b8d5ee9-2ad0-42de-94dd-d38908c2dfe3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756654fbd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"c7bae8035948630a4ec3f945444df2d8e63046aae0bd9da68e6aebf2f2b07c88", Pod:"calico-kube-controllers-756654fbd8-fsrdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b53941e37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.631 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.631 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" iface="eth0" netns=""
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.631 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.631 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.650 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.650 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.650 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.659 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.659 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" HandleID="k8s-pod-network.c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--kube--controllers--756654fbd8--fsrdw-eth0"
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.661 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:39.664283 containerd[1729]: 2025-08-13 00:23:39.662 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa"
Aug 13 00:23:39.664777 containerd[1729]: time="2025-08-13T00:23:39.664331779Z" level=info msg="TearDown network for sandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" successfully"
Aug 13 00:23:39.675613 containerd[1729]: time="2025-08-13T00:23:39.675549810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:23:39.675972 containerd[1729]: time="2025-08-13T00:23:39.675633690Z" level=info msg="RemovePodSandbox \"c1e621c32df091f33e3391145307d27569ceac5483adabf8213a43d633a4e1aa\" returns successfully"
Aug 13 00:23:39.676614 containerd[1729]: time="2025-08-13T00:23:39.676397893Z" level=info msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\""
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.713 [WARNING][6035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72bed6b-002b-4a44-960e-b6cc7d136310", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff", Pod:"coredns-668d6bf9bc-l25zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia332504d368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.713 [INFO][6035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047"
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.713 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" iface="eth0" netns=""
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.713 [INFO][6035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047"
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.713 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047"
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.732 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0"
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.732 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.732 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.743 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.744 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.747 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:39.755219 containerd[1729]: 2025-08-13 00:23:39.752 [INFO][6035] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:39.755219 containerd[1729]: time="2025-08-13T00:23:39.755003793Z" level=info msg="TearDown network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" successfully" Aug 13 00:23:39.755219 containerd[1729]: time="2025-08-13T00:23:39.755035473Z" level=info msg="StopPodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" returns successfully" Aug 13 00:23:39.757386 containerd[1729]: time="2025-08-13T00:23:39.756269757Z" level=info msg="RemovePodSandbox for \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\"" Aug 13 00:23:39.757386 containerd[1729]: time="2025-08-13T00:23:39.756407957Z" level=info msg="Forcibly stopping sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\"" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.809 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a72bed6b-002b-4a44-960e-b6cc7d136310", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"ea4a9a5e3726928901c0a61df83bb1409633da5f7879a44cd1eec33e8da0ffff", Pod:"coredns-668d6bf9bc-l25zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia332504d368", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 
00:23:39.809 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.809 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" iface="eth0" netns="" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.809 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.809 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.831 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.831 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.831 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.839 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.839 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" HandleID="k8s-pod-network.c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--l25zw-eth0" Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.841 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:39.844857 containerd[1729]: 2025-08-13 00:23:39.842 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047" Aug 13 00:23:39.844857 containerd[1729]: time="2025-08-13T00:23:39.844802365Z" level=info msg="TearDown network for sandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" successfully" Aug 13 00:23:39.856006 containerd[1729]: time="2025-08-13T00:23:39.855716356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:39.856006 containerd[1729]: time="2025-08-13T00:23:39.855853476Z" level=info msg="RemovePodSandbox \"c1bb92dcb252297d7684942ce60c7159b737b5c699c0451f04121899d93dd047\" returns successfully" Aug 13 00:23:39.856862 containerd[1729]: time="2025-08-13T00:23:39.856613398Z" level=info msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.890 [WARNING][6080] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0437c77-5cee-49cc-a3f4-ee6bceab493a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639", Pod:"coredns-668d6bf9bc-4rkw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali336563c61e6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.890 [INFO][6080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.890 [INFO][6080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" iface="eth0" netns="" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.890 [INFO][6080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.890 [INFO][6080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.910 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.910 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.910 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.919 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.919 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.921 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:39.924608 containerd[1729]: 2025-08-13 00:23:39.922 [INFO][6080] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.924608 containerd[1729]: time="2025-08-13T00:23:39.924476108Z" level=info msg="TearDown network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" successfully" Aug 13 00:23:39.924608 containerd[1729]: time="2025-08-13T00:23:39.924505469Z" level=info msg="StopPodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" returns successfully" Aug 13 00:23:39.925874 containerd[1729]: time="2025-08-13T00:23:39.925521591Z" level=info msg="RemovePodSandbox for \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" Aug 13 00:23:39.925874 containerd[1729]: time="2025-08-13T00:23:39.925554511Z" level=info msg="Forcibly stopping sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\"" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.964 [WARNING][6101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0437c77-5cee-49cc-a3f4-ee6bceab493a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"7487a9bf6ca2a8cae800993ca8d2f5834f691bc797c37032c0200f3e12ebb639", Pod:"coredns-668d6bf9bc-4rkw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali336563c61e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 
00:23:39.965 [INFO][6101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.965 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" iface="eth0" netns="" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.965 [INFO][6101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.965 [INFO][6101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.984 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.985 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.985 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.994 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.994 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" HandleID="k8s-pod-network.66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Workload="ci--4081.3.5--a--2fbd311b45-k8s-coredns--668d6bf9bc--4rkw9-eth0" Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.996 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:39.999758 containerd[1729]: 2025-08-13 00:23:39.998 [INFO][6101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad" Aug 13 00:23:40.001656 containerd[1729]: time="2025-08-13T00:23:40.000204721Z" level=info msg="TearDown network for sandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" successfully" Aug 13 00:23:40.008485 containerd[1729]: time="2025-08-13T00:23:40.007757622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.008683 containerd[1729]: time="2025-08-13T00:23:40.008659425Z" level=info msg="RemovePodSandbox \"66f12ee18652804d5fdf1d7a19e3b189dfcfd49be8fdae80bded6402a05261ad\" returns successfully" Aug 13 00:23:40.009546 containerd[1729]: time="2025-08-13T00:23:40.009257146Z" level=info msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.043 [WARNING][6122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc7ffc9-ee10-44a4-88ba-09de883ee749", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116", Pod:"csi-node-driver-wbgr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67636b3a5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.043 [INFO][6122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.043 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" iface="eth0" netns="" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.043 [INFO][6122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.043 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.064 [INFO][6129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.064 [INFO][6129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.064 [INFO][6129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.073 [WARNING][6129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.073 [INFO][6129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.074 [INFO][6129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.078016 containerd[1729]: 2025-08-13 00:23:40.076 [INFO][6122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.078691 containerd[1729]: time="2025-08-13T00:23:40.078597461Z" level=info msg="TearDown network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" successfully" Aug 13 00:23:40.078691 containerd[1729]: time="2025-08-13T00:23:40.078646141Z" level=info msg="StopPodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" returns successfully" Aug 13 00:23:40.079454 containerd[1729]: time="2025-08-13T00:23:40.079231263Z" level=info msg="RemovePodSandbox for \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" Aug 13 00:23:40.079454 containerd[1729]: time="2025-08-13T00:23:40.079268063Z" level=info msg="Forcibly stopping sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\"" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.119 [WARNING][6143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc7ffc9-ee10-44a4-88ba-09de883ee749", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8f4b438d55303f40089bb3a1373e5418ae1cd95acb08b535c8461c302e3116", Pod:"csi-node-driver-wbgr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67636b3a5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.119 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.119 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" iface="eth0" netns="" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.119 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.119 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.139 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.139 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.139 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.148 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.148 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" HandleID="k8s-pod-network.6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Workload="ci--4081.3.5--a--2fbd311b45-k8s-csi--node--driver--wbgr7-eth0" Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.150 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.153930 containerd[1729]: 2025-08-13 00:23:40.152 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab" Aug 13 00:23:40.155665 containerd[1729]: time="2025-08-13T00:23:40.154128913Z" level=info msg="TearDown network for sandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" successfully" Aug 13 00:23:40.171230 containerd[1729]: time="2025-08-13T00:23:40.171181880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.171452 containerd[1729]: time="2025-08-13T00:23:40.171433561Z" level=info msg="RemovePodSandbox \"6a5f8afb713b7fd0849b400b70e518ab30effc0337a56ef5b04ee5e99be94dab\" returns successfully" Aug 13 00:23:40.172013 containerd[1729]: time="2025-08-13T00:23:40.171988083Z" level=info msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\"" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.217 [WARNING][6164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"92ee9526-40a5-4de2-be91-45eaf7973f17", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f", Pod:"calico-apiserver-56fd6c9f9d-tgxh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f582317deb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.218 [INFO][6164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.218 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" iface="eth0" netns="" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.218 [INFO][6164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.218 [INFO][6164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.238 [INFO][6171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.238 [INFO][6171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.238 [INFO][6171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.247 [WARNING][6171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.247 [INFO][6171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.249 [INFO][6171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.252119 containerd[1729]: 2025-08-13 00:23:40.250 [INFO][6164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.252774 containerd[1729]: time="2025-08-13T00:23:40.252697669Z" level=info msg="TearDown network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" successfully" Aug 13 00:23:40.252774 containerd[1729]: time="2025-08-13T00:23:40.252733629Z" level=info msg="StopPodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" returns successfully" Aug 13 00:23:40.253714 containerd[1729]: time="2025-08-13T00:23:40.253405591Z" level=info msg="RemovePodSandbox for \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\"" Aug 13 00:23:40.253714 containerd[1729]: time="2025-08-13T00:23:40.253439431Z" level=info msg="Forcibly stopping sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\"" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.288 [WARNING][6185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"92ee9526-40a5-4de2-be91-45eaf7973f17", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"a230f5af1df5dbfa08f62e9406ae3bc25163cf01d9723ad7daa4c500babe046f", Pod:"calico-apiserver-56fd6c9f9d-tgxh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f582317deb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.288 [INFO][6185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.288 [INFO][6185] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" iface="eth0" netns="" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.288 [INFO][6185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.288 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.308 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.308 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.308 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.318 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.318 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" HandleID="k8s-pod-network.23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--tgxh9-eth0" Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.320 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.323081 containerd[1729]: 2025-08-13 00:23:40.321 [INFO][6185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83" Aug 13 00:23:40.325277 containerd[1729]: time="2025-08-13T00:23:40.324777471Z" level=info msg="TearDown network for sandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" successfully" Aug 13 00:23:40.333435 containerd[1729]: time="2025-08-13T00:23:40.333240335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.333435 containerd[1729]: time="2025-08-13T00:23:40.333325575Z" level=info msg="RemovePodSandbox \"23d949bd96853ddacca389ca19dacf82984fb09c4288b2d29058ebf1c37bee83\" returns successfully" Aug 13 00:23:40.333820 containerd[1729]: time="2025-08-13T00:23:40.333791177Z" level=info msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\"" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.371 [WARNING][6206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c10593-4a39-4156-937c-70315299f09c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d", Pod:"calico-apiserver-56fd6c9f9d-sgp55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f2f714cd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.372 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.372 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" iface="eth0" netns="" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.372 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.372 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.392 [INFO][6213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.394 [INFO][6213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.394 [INFO][6213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.403 [WARNING][6213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.403 [INFO][6213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.405 [INFO][6213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.409101 containerd[1729]: 2025-08-13 00:23:40.407 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.409753 containerd[1729]: time="2025-08-13T00:23:40.409163428Z" level=info msg="TearDown network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" successfully" Aug 13 00:23:40.409753 containerd[1729]: time="2025-08-13T00:23:40.409191348Z" level=info msg="StopPodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" returns successfully" Aug 13 00:23:40.410489 containerd[1729]: time="2025-08-13T00:23:40.409963630Z" level=info msg="RemovePodSandbox for \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\"" Aug 13 00:23:40.410489 containerd[1729]: time="2025-08-13T00:23:40.409995710Z" level=info msg="Forcibly stopping sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\"" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.446 [WARNING][6227] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0", GenerateName:"calico-apiserver-56fd6c9f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0c10593-4a39-4156-937c-70315299f09c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56fd6c9f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"cd8dcaae18ce4b721c645cdf04cc33685f6c4aa50af27b14f1deebe866646a3d", Pod:"calico-apiserver-56fd6c9f9d-sgp55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f2f714cd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.447 [INFO][6227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.447 [INFO][6227] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" iface="eth0" netns="" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.447 [INFO][6227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.447 [INFO][6227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.467 [INFO][6234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.467 [INFO][6234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.467 [INFO][6234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.475 [WARNING][6234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.476 [INFO][6234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" HandleID="k8s-pod-network.1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Workload="ci--4081.3.5--a--2fbd311b45-k8s-calico--apiserver--56fd6c9f9d--sgp55-eth0" Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.477 [INFO][6234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.480729 containerd[1729]: 2025-08-13 00:23:40.478 [INFO][6227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2" Aug 13 00:23:40.481452 containerd[1729]: time="2025-08-13T00:23:40.481048390Z" level=info msg="TearDown network for sandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" successfully" Aug 13 00:23:40.490099 containerd[1729]: time="2025-08-13T00:23:40.489900774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.490099 containerd[1729]: time="2025-08-13T00:23:40.489983735Z" level=info msg="RemovePodSandbox \"1bf5feda9701a2540d92a5739a8f31cbdc624d2f1928cfdb2fe4fe3f96b5fce2\" returns successfully" Aug 13 00:23:40.490501 containerd[1729]: time="2025-08-13T00:23:40.490472096Z" level=info msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.526 [WARNING][6248] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917", Pod:"goldmane-768f4c5c69-l455f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali61cb0939c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.527 [INFO][6248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.527 [INFO][6248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" iface="eth0" netns="" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.527 [INFO][6248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.527 [INFO][6248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.552 [INFO][6255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.552 [INFO][6255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.552 [INFO][6255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.562 [WARNING][6255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.562 [INFO][6255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.565 [INFO][6255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.569004 containerd[1729]: 2025-08-13 00:23:40.566 [INFO][6248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.570580 containerd[1729]: time="2025-08-13T00:23:40.569271717Z" level=info msg="TearDown network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" successfully" Aug 13 00:23:40.570580 containerd[1729]: time="2025-08-13T00:23:40.569303917Z" level=info msg="StopPodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" returns successfully" Aug 13 00:23:40.571293 containerd[1729]: time="2025-08-13T00:23:40.570862762Z" level=info msg="RemovePodSandbox for \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" Aug 13 00:23:40.571293 containerd[1729]: time="2025-08-13T00:23:40.570899682Z" level=info msg="Forcibly stopping sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\"" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.604 [WARNING][6270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"8bf5886c-e31f-4c51-ba8d-b8b7e72967c4", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2fbd311b45", ContainerID:"623222fe2b4aef525d44e42bf8d9b94cb52f8186ee417ad375e64e6e3ad17917", Pod:"goldmane-768f4c5c69-l455f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61cb0939c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.606 [INFO][6270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.606 [INFO][6270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" iface="eth0" netns="" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.606 [INFO][6270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.606 [INFO][6270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.626 [INFO][6278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.626 [INFO][6278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.626 [INFO][6278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.637 [WARNING][6278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.638 [INFO][6278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" HandleID="k8s-pod-network.65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Workload="ci--4081.3.5--a--2fbd311b45-k8s-goldmane--768f4c5c69--l455f-eth0" Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.639 [INFO][6278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.643244 containerd[1729]: 2025-08-13 00:23:40.640 [INFO][6270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619" Aug 13 00:23:40.643244 containerd[1729]: time="2025-08-13T00:23:40.642624283Z" level=info msg="TearDown network for sandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" successfully" Aug 13 00:23:40.653278 containerd[1729]: time="2025-08-13T00:23:40.653043272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.653278 containerd[1729]: time="2025-08-13T00:23:40.653175232Z" level=info msg="RemovePodSandbox \"65e81665b6482a1740ef08254d13c7677e48b56bf9c48fa86ba2c75a55c2c619\" returns successfully" Aug 13 00:23:40.653659 containerd[1729]: time="2025-08-13T00:23:40.653633594Z" level=info msg="StopPodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.690 [WARNING][6292] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.690 [INFO][6292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.690 [INFO][6292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" iface="eth0" netns="" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.690 [INFO][6292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.690 [INFO][6292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.708 [INFO][6299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.708 [INFO][6299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.709 [INFO][6299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.717 [WARNING][6299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.717 [INFO][6299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.719 [INFO][6299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.722409 containerd[1729]: 2025-08-13 00:23:40.720 [INFO][6292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.722786 containerd[1729]: time="2025-08-13T00:23:40.722460427Z" level=info msg="TearDown network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" successfully" Aug 13 00:23:40.722786 containerd[1729]: time="2025-08-13T00:23:40.722488387Z" level=info msg="StopPodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" returns successfully" Aug 13 00:23:40.722990 containerd[1729]: time="2025-08-13T00:23:40.722961228Z" level=info msg="RemovePodSandbox for \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" Aug 13 00:23:40.723033 containerd[1729]: time="2025-08-13T00:23:40.723001508Z" level=info msg="Forcibly stopping sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\"" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.758 [WARNING][6313] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" WorkloadEndpoint="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.759 [INFO][6313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.759 [INFO][6313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" iface="eth0" netns="" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.759 [INFO][6313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.759 [INFO][6313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.776 [INFO][6320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.777 [INFO][6320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.777 [INFO][6320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.786 [WARNING][6320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.786 [INFO][6320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" HandleID="k8s-pod-network.e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Workload="ci--4081.3.5--a--2fbd311b45-k8s-whisker--5dc58ff4cb--d6pst-eth0" Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.788 [INFO][6320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:40.791792 containerd[1729]: 2025-08-13 00:23:40.789 [INFO][6313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194" Aug 13 00:23:40.792236 containerd[1729]: time="2025-08-13T00:23:40.791857861Z" level=info msg="TearDown network for sandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" successfully" Aug 13 00:23:40.799619 containerd[1729]: time="2025-08-13T00:23:40.799572683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:40.799709 containerd[1729]: time="2025-08-13T00:23:40.799658203Z" level=info msg="RemovePodSandbox \"e3d0e23625536e970e1fe784e289924c267965b1c91ce78f9f40c5efe8ef4194\" returns successfully" Aug 13 00:23:48.662306 kubelet[3171]: I0813 00:23:48.661991 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:23:51.250562 kubelet[3171]: I0813 00:23:51.249701 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:24:04.140118 systemd[1]: run-containerd-runc-k8s.io-9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d-runc.YP9eqW.mount: Deactivated successfully. Aug 13 00:24:26.592905 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:58234.service - OpenSSH per-connection server daemon (10.200.16.10:58234). Aug 13 00:24:27.068350 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 58234 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:27.071298 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:27.077773 systemd-logind[1697]: New session 10 of user core. Aug 13 00:24:27.084396 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:24:27.529402 sshd[6486]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:27.536562 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:58234.service: Deactivated successfully. Aug 13 00:24:27.537396 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:24:27.540735 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:24:27.545920 systemd-logind[1697]: Removed session 10. Aug 13 00:24:32.616507 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:38322.service - OpenSSH per-connection server daemon (10.200.16.10:38322). 
Aug 13 00:24:33.066589 sshd[6518]: Accepted publickey for core from 10.200.16.10 port 38322 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:33.068391 sshd[6518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:33.073267 systemd-logind[1697]: New session 11 of user core. Aug 13 00:24:33.081386 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:24:33.492345 sshd[6518]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:33.498729 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:24:33.500284 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:38322.service: Deactivated successfully. Aug 13 00:24:33.507934 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:24:33.511445 systemd-logind[1697]: Removed session 11. Aug 13 00:24:38.579051 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:38326.service - OpenSSH per-connection server daemon (10.200.16.10:38326). Aug 13 00:24:39.063625 sshd[6553]: Accepted publickey for core from 10.200.16.10 port 38326 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:39.065280 sshd[6553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:39.070476 systemd-logind[1697]: New session 12 of user core. Aug 13 00:24:39.078441 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:24:39.476503 sshd[6553]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:39.484169 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:38326.service: Deactivated successfully. Aug 13 00:24:39.484196 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:24:39.487436 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:24:39.489276 systemd-logind[1697]: Removed session 12. 
Aug 13 00:24:44.561460 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:43504.service - OpenSSH per-connection server daemon (10.200.16.10:43504). Aug 13 00:24:45.012421 sshd[6568]: Accepted publickey for core from 10.200.16.10 port 43504 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:45.014305 sshd[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:45.020400 systemd-logind[1697]: New session 13 of user core. Aug 13 00:24:45.024404 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:24:45.423859 sshd[6568]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:45.427612 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:43504.service: Deactivated successfully. Aug 13 00:24:45.429604 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:24:45.430346 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:24:45.431939 systemd-logind[1697]: Removed session 13. Aug 13 00:24:45.509773 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:43506.service - OpenSSH per-connection server daemon (10.200.16.10:43506). Aug 13 00:24:45.985009 sshd[6584]: Accepted publickey for core from 10.200.16.10 port 43506 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:45.986292 sshd[6584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:45.991699 systemd-logind[1697]: New session 14 of user core. Aug 13 00:24:46.000340 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:24:46.440494 sshd[6584]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:46.447950 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:43506.service: Deactivated successfully. Aug 13 00:24:46.452767 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:24:46.456054 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit. 
Aug 13 00:24:46.458004 systemd-logind[1697]: Removed session 14. Aug 13 00:24:46.529792 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:43522.service - OpenSSH per-connection server daemon (10.200.16.10:43522). Aug 13 00:24:47.028877 sshd[6598]: Accepted publickey for core from 10.200.16.10 port 43522 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:47.030618 sshd[6598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:47.036034 systemd-logind[1697]: New session 15 of user core. Aug 13 00:24:47.040364 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:24:47.445339 sshd[6598]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:47.449321 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:43522.service: Deactivated successfully. Aug 13 00:24:47.451361 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:24:47.452952 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:24:47.454661 systemd-logind[1697]: Removed session 15. Aug 13 00:24:52.528723 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:45616.service - OpenSSH per-connection server daemon (10.200.16.10:45616). Aug 13 00:24:52.991028 sshd[6642]: Accepted publickey for core from 10.200.16.10 port 45616 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:52.992535 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:52.998297 systemd-logind[1697]: New session 16 of user core. Aug 13 00:24:53.002302 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:24:53.422258 sshd[6642]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:53.425713 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:45616.service: Deactivated successfully. Aug 13 00:24:53.428695 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 13 00:24:53.430830 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:24:53.432366 systemd-logind[1697]: Removed session 16. Aug 13 00:24:53.500246 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:45630.service - OpenSSH per-connection server daemon (10.200.16.10:45630). Aug 13 00:24:53.930788 sshd[6655]: Accepted publickey for core from 10.200.16.10 port 45630 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:53.932755 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:53.940304 systemd-logind[1697]: New session 17 of user core. Aug 13 00:24:53.946380 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:24:54.475955 sshd[6655]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:54.480339 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:45630.service: Deactivated successfully. Aug 13 00:24:54.483594 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:24:54.484777 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:24:54.485998 systemd-logind[1697]: Removed session 17. Aug 13 00:24:54.559420 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:45640.service - OpenSSH per-connection server daemon (10.200.16.10:45640). Aug 13 00:24:54.991096 sshd[6666]: Accepted publickey for core from 10.200.16.10 port 45640 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:54.992705 sshd[6666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:54.997754 systemd-logind[1697]: New session 18 of user core. Aug 13 00:24:55.006320 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:24:55.988446 sshd[6666]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:55.992342 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:45640.service: Deactivated successfully. 
Aug 13 00:24:55.994740 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:24:55.995779 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:24:55.997590 systemd-logind[1697]: Removed session 18. Aug 13 00:24:56.070952 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:45656.service - OpenSSH per-connection server daemon (10.200.16.10:45656). Aug 13 00:24:56.527524 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 45656 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:56.529185 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:56.534967 systemd-logind[1697]: New session 19 of user core. Aug 13 00:24:56.540355 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:24:57.053397 sshd[6684]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:57.057347 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:45656.service: Deactivated successfully. Aug 13 00:24:57.061754 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:24:57.062894 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:24:57.063922 systemd-logind[1697]: Removed session 19. Aug 13 00:24:57.142435 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:45670.service - OpenSSH per-connection server daemon (10.200.16.10:45670). Aug 13 00:24:57.609092 sshd[6695]: Accepted publickey for core from 10.200.16.10 port 45670 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:24:57.611802 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:24:57.620877 systemd-logind[1697]: New session 20 of user core. Aug 13 00:24:57.627453 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 00:24:58.042689 sshd[6695]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:58.046105 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:45670.service: Deactivated successfully. Aug 13 00:24:58.052731 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:24:58.056066 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:24:58.058957 systemd-logind[1697]: Removed session 20. Aug 13 00:25:03.133071 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:48720.service - OpenSSH per-connection server daemon (10.200.16.10:48720). Aug 13 00:25:03.608172 sshd[6750]: Accepted publickey for core from 10.200.16.10 port 48720 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:03.608988 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:03.613384 systemd-logind[1697]: New session 21 of user core. Aug 13 00:25:03.621479 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:25:04.040257 sshd[6750]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:04.045329 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:48720.service: Deactivated successfully. Aug 13 00:25:04.048276 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:25:04.049077 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:25:04.052508 systemd-logind[1697]: Removed session 21. Aug 13 00:25:09.129453 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:48726.service - OpenSSH per-connection server daemon (10.200.16.10:48726). Aug 13 00:25:09.604926 sshd[6802]: Accepted publickey for core from 10.200.16.10 port 48726 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:09.605996 sshd[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:09.610671 systemd-logind[1697]: New session 22 of user core. 
Aug 13 00:25:09.615348 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:25:10.010626 sshd[6802]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:10.015893 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:25:10.016589 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:48726.service: Deactivated successfully. Aug 13 00:25:10.018975 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:25:10.020814 systemd-logind[1697]: Removed session 22. Aug 13 00:25:15.108526 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:43240.service - OpenSSH per-connection server daemon (10.200.16.10:43240). Aug 13 00:25:15.456797 systemd[1]: run-containerd-runc-k8s.io-d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154-runc.t6ZSmt.mount: Deactivated successfully. Aug 13 00:25:15.582886 sshd[6815]: Accepted publickey for core from 10.200.16.10 port 43240 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:15.584857 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:15.589218 systemd-logind[1697]: New session 23 of user core. Aug 13 00:25:15.594319 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:25:16.000833 sshd[6815]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:16.005060 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:25:16.005326 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:43240.service: Deactivated successfully. Aug 13 00:25:16.007770 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:25:16.010228 systemd-logind[1697]: Removed session 23. Aug 13 00:25:21.094476 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:57998.service - OpenSSH per-connection server daemon (10.200.16.10:57998). 
Aug 13 00:25:21.583191 sshd[6872]: Accepted publickey for core from 10.200.16.10 port 57998 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:21.584927 sshd[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:21.590731 systemd-logind[1697]: New session 24 of user core. Aug 13 00:25:21.595627 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:25:22.002446 sshd[6872]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:22.007464 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:57998.service: Deactivated successfully. Aug 13 00:25:22.010947 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:25:22.013199 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:25:22.016431 systemd-logind[1697]: Removed session 24. Aug 13 00:25:27.091256 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:58012.service - OpenSSH per-connection server daemon (10.200.16.10:58012). Aug 13 00:25:27.545746 sshd[6885]: Accepted publickey for core from 10.200.16.10 port 58012 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:27.548888 sshd[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:27.556327 systemd-logind[1697]: New session 25 of user core. Aug 13 00:25:27.567438 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:25:27.826710 systemd[1]: run-containerd-runc-k8s.io-d04ab27c78abf9f2c77b9adc180cef544d595e90dcdec5c71acb84c8f5c69154-runc.dK13nk.mount: Deactivated successfully. Aug 13 00:25:27.976993 sshd[6885]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:27.980708 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:58012.service: Deactivated successfully. Aug 13 00:25:27.983575 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:25:27.984793 systemd-logind[1697]: Session 25 logged out. 
Waiting for processes to exit. Aug 13 00:25:27.985806 systemd-logind[1697]: Removed session 25. Aug 13 00:25:33.073476 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:58626.service - OpenSSH per-connection server daemon (10.200.16.10:58626). Aug 13 00:25:33.564669 sshd[6917]: Accepted publickey for core from 10.200.16.10 port 58626 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:25:33.567659 sshd[6917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:33.578875 systemd-logind[1697]: New session 26 of user core. Aug 13 00:25:33.584369 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:25:34.003540 sshd[6917]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:34.008546 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:58626.service: Deactivated successfully. Aug 13 00:25:34.013050 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:25:34.016900 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:25:34.018506 systemd-logind[1697]: Removed session 26. Aug 13 00:25:36.882266 systemd[1]: run-containerd-runc-k8s.io-9de90623f4ff1a7f333d1aafae08e33d73064eee22742ca6da55bc26a2fb363d-runc.yoBv9i.mount: Deactivated successfully.