Jul 7 05:54:13.371274 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 05:54:13.371298 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 05:54:13.371306 kernel: KASLR enabled
Jul 7 05:54:13.371312 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 7 05:54:13.371319 kernel: printk: bootconsole [pl11] enabled
Jul 7 05:54:13.371325 kernel: efi: EFI v2.7 by EDK II
Jul 7 05:54:13.371332 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jul 7 05:54:13.371338 kernel: random: crng init done
Jul 7 05:54:13.371344 kernel: ACPI: Early table checksum verification disabled
Jul 7 05:54:13.371350 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 7 05:54:13.371357 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371363 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371370 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 05:54:13.371377 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371384 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371390 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371397 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371405 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371411 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371418 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 7 05:54:13.371424 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:54:13.371431 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 7 05:54:13.371437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 7 05:54:13.371443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 7 05:54:13.371450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 7 05:54:13.371456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 7 05:54:13.371463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 7 05:54:13.371469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 7 05:54:13.371477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 7 05:54:13.371483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 7 05:54:13.371490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 7 05:54:13.371496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 7 05:54:13.371503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 7 05:54:13.371509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 7 05:54:13.371515 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 7 05:54:13.371521 kernel: Zone ranges:
Jul 7 05:54:13.371528 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 7 05:54:13.371534 kernel: DMA32 empty
Jul 7 05:54:13.371540 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 05:54:13.371547 kernel: Movable zone start for each node
Jul 7 05:54:13.371557 kernel: Early memory node ranges
Jul 7 05:54:13.371564 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 7 05:54:13.371571 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 7 05:54:13.371577 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 7 05:54:13.371584 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 7 05:54:13.371592 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 7 05:54:13.371599 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 7 05:54:13.371606 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 05:54:13.371613 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 7 05:54:13.371619 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 7 05:54:13.371626 kernel: psci: probing for conduit method from ACPI.
Jul 7 05:54:13.371633 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 05:54:13.371640 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 05:54:13.371647 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 7 05:54:13.371654 kernel: psci: SMC Calling Convention v1.4
Jul 7 05:54:13.371661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 7 05:54:13.371668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 7 05:54:13.371676 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 05:54:13.373726 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 05:54:13.373736 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 05:54:13.373743 kernel: Detected PIPT I-cache on CPU0
Jul 7 05:54:13.373750 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 05:54:13.373757 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 05:54:13.373764 kernel: CPU features: detected: Spectre-BHB
Jul 7 05:54:13.373771 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 05:54:13.373778 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 05:54:13.373785 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 05:54:13.373792 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 7 05:54:13.373803 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 05:54:13.373811 kernel: alternatives: applying boot alternatives
Jul 7 05:54:13.373819 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:54:13.373828 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 05:54:13.373834 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 05:54:13.373842 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 05:54:13.373849 kernel: Fallback order for Node 0: 0
Jul 7 05:54:13.373855 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 7 05:54:13.373862 kernel: Policy zone: Normal
Jul 7 05:54:13.373869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 05:54:13.373876 kernel: software IO TLB: area num 2.
Jul 7 05:54:13.373884 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jul 7 05:54:13.373892 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Jul 7 05:54:13.373899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 05:54:13.373905 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 05:54:13.373913 kernel: rcu: RCU event tracing is enabled.
Jul 7 05:54:13.373920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 05:54:13.373927 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 05:54:13.373934 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 05:54:13.373941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 05:54:13.373947 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 05:54:13.373954 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 05:54:13.373963 kernel: GICv3: 960 SPIs implemented
Jul 7 05:54:13.373969 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 05:54:13.373976 kernel: Root IRQ handler: gic_handle_irq
Jul 7 05:54:13.373983 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 05:54:13.373990 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 7 05:54:13.373997 kernel: ITS: No ITS available, not enabling LPIs
Jul 7 05:54:13.374004 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 05:54:13.374011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:54:13.374017 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 05:54:13.374024 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 05:54:13.374032 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 05:54:13.374040 kernel: Console: colour dummy device 80x25
Jul 7 05:54:13.374047 kernel: printk: console [tty1] enabled
Jul 7 05:54:13.374055 kernel: ACPI: Core revision 20230628
Jul 7 05:54:13.374062 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 05:54:13.374069 kernel: pid_max: default: 32768 minimum: 301
Jul 7 05:54:13.374076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 05:54:13.374083 kernel: landlock: Up and running.
Jul 7 05:54:13.374090 kernel: SELinux: Initializing.
Jul 7 05:54:13.374097 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:54:13.374104 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:54:13.374113 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:54:13.374120 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:54:13.374128 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 7 05:54:13.374135 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 7 05:54:13.374141 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 05:54:13.374149 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 05:54:13.374156 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 05:54:13.374170 kernel: Remapping and enabling EFI services.
Jul 7 05:54:13.374177 kernel: smp: Bringing up secondary CPUs ...
Jul 7 05:54:13.374185 kernel: Detected PIPT I-cache on CPU1
Jul 7 05:54:13.374192 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 7 05:54:13.374201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:54:13.374208 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 05:54:13.374216 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 05:54:13.374223 kernel: SMP: Total of 2 processors activated.
Jul 7 05:54:13.374231 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 05:54:13.374240 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 7 05:54:13.374248 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 05:54:13.374255 kernel: CPU features: detected: CRC32 instructions
Jul 7 05:54:13.374263 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 05:54:13.374270 kernel: CPU features: detected: LSE atomic instructions
Jul 7 05:54:13.374277 kernel: CPU features: detected: Privileged Access Never
Jul 7 05:54:13.374285 kernel: CPU: All CPU(s) started at EL1
Jul 7 05:54:13.374292 kernel: alternatives: applying system-wide alternatives
Jul 7 05:54:13.374299 kernel: devtmpfs: initialized
Jul 7 05:54:13.374309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 05:54:13.374317 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 05:54:13.374324 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 05:54:13.374332 kernel: SMBIOS 3.1.0 present.
Jul 7 05:54:13.374340 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 7 05:54:13.374347 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 05:54:13.374355 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 05:54:13.374362 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 05:54:13.374370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 05:54:13.374379 kernel: audit: initializing netlink subsys (disabled)
Jul 7 05:54:13.374386 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 7 05:54:13.374394 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 05:54:13.374401 kernel: cpuidle: using governor menu
Jul 7 05:54:13.374409 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 05:54:13.374416 kernel: ASID allocator initialised with 32768 entries
Jul 7 05:54:13.374424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 05:54:13.374431 kernel: Serial: AMBA PL011 UART driver
Jul 7 05:54:13.374439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 05:54:13.374447 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 05:54:13.374455 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 05:54:13.374462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 05:54:13.374470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 05:54:13.374477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 05:54:13.374485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 05:54:13.374492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 05:54:13.374499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 05:54:13.374507 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 05:54:13.374516 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 05:54:13.374523 kernel: ACPI: Added _OSI(Module Device)
Jul 7 05:54:13.374531 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 05:54:13.374538 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 05:54:13.374545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 05:54:13.374553 kernel: ACPI: Interpreter enabled
Jul 7 05:54:13.374560 kernel: ACPI: Using GIC for interrupt routing
Jul 7 05:54:13.374568 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 05:54:13.374575 kernel: printk: console [ttyAMA0] enabled
Jul 7 05:54:13.374584 kernel: printk: bootconsole [pl11] disabled
Jul 7 05:54:13.374591 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 7 05:54:13.374599 kernel: iommu: Default domain type: Translated
Jul 7 05:54:13.374606 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 05:54:13.374614 kernel: efivars: Registered efivars operations
Jul 7 05:54:13.374621 kernel: vgaarb: loaded
Jul 7 05:54:13.374629 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 05:54:13.374636 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 05:54:13.374644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 05:54:13.374654 kernel: pnp: PnP ACPI init
Jul 7 05:54:13.374662 kernel: pnp: PnP ACPI: found 0 devices
Jul 7 05:54:13.374669 kernel: NET: Registered PF_INET protocol family
Jul 7 05:54:13.374677 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 05:54:13.374696 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 05:54:13.374704 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 05:54:13.374712 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 05:54:13.374719 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 05:54:13.374727 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 05:54:13.374736 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:54:13.374744 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:54:13.374751 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 05:54:13.374759 kernel: PCI: CLS 0 bytes, default 64
Jul 7 05:54:13.374766 kernel: kvm [1]: HYP mode not available
Jul 7 05:54:13.374773 kernel: Initialise system trusted keyrings
Jul 7 05:54:13.374781 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 05:54:13.374788 kernel: Key type asymmetric registered
Jul 7 05:54:13.374795 kernel: Asymmetric key parser 'x509' registered
Jul 7 05:54:13.374804 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 05:54:13.374811 kernel: io scheduler mq-deadline registered
Jul 7 05:54:13.374819 kernel: io scheduler kyber registered
Jul 7 05:54:13.374826 kernel: io scheduler bfq registered
Jul 7 05:54:13.374833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 05:54:13.374841 kernel: thunder_xcv, ver 1.0
Jul 7 05:54:13.374848 kernel: thunder_bgx, ver 1.0
Jul 7 05:54:13.374855 kernel: nicpf, ver 1.0
Jul 7 05:54:13.374862 kernel: nicvf, ver 1.0
Jul 7 05:54:13.375012 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 05:54:13.375088 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:54:12 UTC (1751867652)
Jul 7 05:54:13.375098 kernel: efifb: probing for efifb
Jul 7 05:54:13.375106 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 05:54:13.375114 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 05:54:13.375121 kernel: efifb: scrolling: redraw
Jul 7 05:54:13.375129 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 05:54:13.375137 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 05:54:13.375146 kernel: fb0: EFI VGA frame buffer device
Jul 7 05:54:13.375154 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 7 05:54:13.375161 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 05:54:13.375169 kernel: No ACPI PMU IRQ for CPU0
Jul 7 05:54:13.375176 kernel: No ACPI PMU IRQ for CPU1
Jul 7 05:54:13.375184 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 7 05:54:13.375191 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 05:54:13.375199 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 05:54:13.375206 kernel: NET: Registered PF_INET6 protocol family
Jul 7 05:54:13.375215 kernel: Segment Routing with IPv6
Jul 7 05:54:13.375223 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 05:54:13.375230 kernel: NET: Registered PF_PACKET protocol family
Jul 7 05:54:13.375238 kernel: Key type dns_resolver registered
Jul 7 05:54:13.375245 kernel: registered taskstats version 1
Jul 7 05:54:13.375253 kernel: Loading compiled-in X.509 certificates
Jul 7 05:54:13.375260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 05:54:13.375267 kernel: Key type .fscrypt registered
Jul 7 05:54:13.375275 kernel: Key type fscrypt-provisioning registered
Jul 7 05:54:13.375284 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 05:54:13.375292 kernel: ima: Allocated hash algorithm: sha1
Jul 7 05:54:13.375299 kernel: ima: No architecture policies found
Jul 7 05:54:13.375307 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 05:54:13.375314 kernel: clk: Disabling unused clocks
Jul 7 05:54:13.375321 kernel: Freeing unused kernel memory: 39424K
Jul 7 05:54:13.375329 kernel: Run /init as init process
Jul 7 05:54:13.375336 kernel: with arguments:
Jul 7 05:54:13.375343 kernel: /init
Jul 7 05:54:13.375352 kernel: with environment:
Jul 7 05:54:13.375359 kernel: HOME=/
Jul 7 05:54:13.375366 kernel: TERM=linux
Jul 7 05:54:13.375374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 05:54:13.375383 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:54:13.375393 systemd[1]: Detected virtualization microsoft.
Jul 7 05:54:13.375401 systemd[1]: Detected architecture arm64.
Jul 7 05:54:13.375409 systemd[1]: Running in initrd.
Jul 7 05:54:13.375418 systemd[1]: No hostname configured, using default hostname.
Jul 7 05:54:13.375426 systemd[1]: Hostname set to .
Jul 7 05:54:13.375434 systemd[1]: Initializing machine ID from random generator.
Jul 7 05:54:13.375442 systemd[1]: Queued start job for default target initrd.target.
Jul 7 05:54:13.375450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:54:13.375458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:54:13.375467 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 05:54:13.375475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:54:13.375484 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 05:54:13.375493 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 05:54:13.375502 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 05:54:13.375511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 05:54:13.375519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:54:13.375527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:54:13.375537 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:54:13.375545 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:54:13.375553 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:54:13.375561 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:54:13.375569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:54:13.375578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:54:13.375586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:54:13.375594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:54:13.375602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:54:13.375612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:54:13.375621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:54:13.375629 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:54:13.375637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 05:54:13.375645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:54:13.375653 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 05:54:13.375661 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 05:54:13.375669 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:54:13.375677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:54:13.377764 systemd-journald[216]: Collecting audit messages is disabled.
Jul 7 05:54:13.377787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:13.377797 systemd-journald[216]: Journal started
Jul 7 05:54:13.377818 systemd-journald[216]: Runtime Journal (/run/log/journal/eb8fa030aeea4bd6a2d1fd9920c62ea1) is 8.0M, max 78.5M, 70.5M free.
Jul 7 05:54:13.388080 systemd-modules-load[217]: Inserted module 'overlay'
Jul 7 05:54:13.405905 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:54:13.418702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 05:54:13.420721 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 05:54:13.435728 kernel: Bridge firewalling registered
Jul 7 05:54:13.429315 systemd-modules-load[217]: Inserted module 'br_netfilter'
Jul 7 05:54:13.430218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:54:13.443344 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 05:54:13.453228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:54:13.466533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:13.492058 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:54:13.500102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:54:13.530135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:54:13.551734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:54:13.570374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:13.578839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:13.596967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:54:13.608035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:54:13.637979 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:54:13.646869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:54:13.670932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:54:13.687085 dracut-cmdline[252]: dracut-dracut-053
Jul 7 05:54:13.694472 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:54:13.728937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:54:13.732811 systemd-resolved[253]: Positive Trust Anchors:
Jul 7 05:54:13.732821 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:54:13.732857 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:54:13.735101 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jul 7 05:54:13.742963 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:54:13.757330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:54:13.885694 kernel: SCSI subsystem initialized
Jul 7 05:54:13.892714 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:54:13.903711 kernel: iscsi: registered transport (tcp)
Jul 7 05:54:13.921437 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:54:13.921474 kernel: QLogic iSCSI HBA Driver
Jul 7 05:54:13.964310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:54:13.979943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:54:14.021340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:54:14.021405 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:54:14.028106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:54:14.078716 kernel: raid6: neonx8 gen() 15793 MB/s
Jul 7 05:54:14.097691 kernel: raid6: neonx4 gen() 15667 MB/s
Jul 7 05:54:14.117691 kernel: raid6: neonx2 gen() 13253 MB/s
Jul 7 05:54:14.138692 kernel: raid6: neonx1 gen() 10475 MB/s
Jul 7 05:54:14.159691 kernel: raid6: int64x8 gen() 6955 MB/s
Jul 7 05:54:14.179692 kernel: raid6: int64x4 gen() 7350 MB/s
Jul 7 05:54:14.200693 kernel: raid6: int64x2 gen() 6128 MB/s
Jul 7 05:54:14.225050 kernel: raid6: int64x1 gen() 5059 MB/s
Jul 7 05:54:14.225061 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Jul 7 05:54:14.249579 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 7 05:54:14.249601 kernel: raid6: using neon recovery algorithm
Jul 7 05:54:14.262776 kernel: xor: measuring software checksum speed
Jul 7 05:54:14.262792 kernel: 8regs : 19816 MB/sec
Jul 7 05:54:14.266937 kernel: 32regs : 19636 MB/sec
Jul 7 05:54:14.270881 kernel: arm64_neon : 27034 MB/sec
Jul 7 05:54:14.275423 kernel: xor: using function: arm64_neon (27034 MB/sec)
Jul 7 05:54:14.326697 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:54:14.337074 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:54:14.355867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:54:14.390932 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jul 7 05:54:14.397421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:54:14.417977 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:54:14.443404 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Jul 7 05:54:14.474365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:54:14.493052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:54:14.542094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:54:14.569880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:54:14.595529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:54:14.610430 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:54:14.623803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:54:14.647774 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:54:14.674703 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 05:54:14.683104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:54:14.699532 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:54:14.727761 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 05:54:14.727812 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 05:54:14.727823 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 05:54:14.727844 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 05:54:14.730431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:54:14.771220 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 7 05:54:14.771256 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 05:54:14.771266 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 05:54:14.771408 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 7 05:54:14.771420 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 05:54:14.771432 kernel: scsi host1: storvsc_host_t
Jul 7 05:54:14.730592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:14.789580 kernel: scsi host0: storvsc_host_t
Jul 7 05:54:14.789764 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 05:54:14.802754 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 7 05:54:14.803461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:54:14.820043 kernel: PTP clock support registered
Jul 7 05:54:14.822361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:14.848019 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 05:54:14.848044 kernel: hv_vmbus: registering driver hv_utils
Jul 7 05:54:14.822744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.366827 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 05:54:15.366863 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 05:54:15.366875 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 05:54:15.366888 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: VF slot 1 added
Jul 7 05:54:14.863491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.366835 systemd-resolved[253]: Clock change detected. Flushing caches.
Jul 7 05:54:15.410121 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 05:54:15.410317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 05:54:15.410329 kernel: hv_vmbus: registering driver hv_pci
Jul 7 05:54:15.377906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.429351 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 05:54:15.429542 kernel: hv_pci 15d71d33-c426-4b9f-95b7-042d9b3cf930: PCI VMBus probing: Using version 0x10004
Jul 7 05:54:15.403137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:15.403255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.435640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.487540 kernel: hv_pci 15d71d33-c426-4b9f-95b7-042d9b3cf930: PCI host bridge to bus c426:00
Jul 7 05:54:15.487738 kernel: pci_bus c426:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 05:54:15.487846 kernel: pci_bus c426:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 05:54:15.487924 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 05:54:15.488031 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 05:54:15.501365 kernel: pci c426:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 7 05:54:15.501408 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 05:54:15.501520 kernel: pci c426:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:54:15.509130 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 05:54:15.514325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.538528 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 05:54:15.538920 kernel: pci c426:00:02.0: enabling Extended Tags
Jul 7 05:54:15.540414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:54:15.582261 kernel: pci c426:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c426:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 7 05:54:15.582460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:15.582471 kernel: pci_bus c426:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 05:54:15.589327 kernel: pci c426:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:54:15.594791 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 05:54:15.613662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:15.648443 kernel: mlx5_core c426:00:02.0: enabling device (0000 -> 0002)
Jul 7 05:54:15.655101 kernel: mlx5_core c426:00:02.0: firmware version: 16.30.1284
Jul 7 05:54:15.853049 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: VF registering: eth1
Jul 7 05:54:15.853278 kernel: mlx5_core c426:00:02.0 eth1: joined to eth0
Jul 7 05:54:15.861237 kernel: mlx5_core c426:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 05:54:15.871108 kernel: mlx5_core c426:00:02.0 enP50214s1: renamed from eth1
Jul 7 05:54:16.080065 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 05:54:16.121350 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489)
Jul 7 05:54:16.137169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:54:16.166174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 05:54:16.192221 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (502)
Jul 7 05:54:16.195583 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 05:54:16.203369 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 05:54:16.241242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:54:16.264191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:16.272108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:17.282176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:17.284710 disk-uuid[601]: The operation has completed successfully.
Jul 7 05:54:17.348328 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:54:17.350272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:54:17.384296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:54:17.397296 sh[687]: Success
Jul 7 05:54:17.426130 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:54:17.585395 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:54:17.608231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:54:17.618353 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:54:17.649842 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:54:17.649902 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:17.657048 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:54:17.662158 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:54:17.666581 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:54:17.897838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:54:17.903733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:54:17.921362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:54:17.929299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:54:17.968322 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:17.968389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:17.973122 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:18.003299 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:18.011036 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:54:18.025336 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:18.031170 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:54:18.050576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:54:18.069460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:54:18.084758 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:54:18.127275 systemd-networkd[871]: lo: Link UP
Jul 7 05:54:18.127286 systemd-networkd[871]: lo: Gained carrier
Jul 7 05:54:18.128839 systemd-networkd[871]: Enumeration completed
Jul 7 05:54:18.129470 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:18.129473 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:54:18.134803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:54:18.141046 systemd[1]: Reached target network.target - Network.
Jul 7 05:54:18.218102 kernel: mlx5_core c426:00:02.0 enP50214s1: Link up
Jul 7 05:54:18.257104 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: Data path switched to VF: enP50214s1
Jul 7 05:54:18.257802 systemd-networkd[871]: enP50214s1: Link UP
Jul 7 05:54:18.257881 systemd-networkd[871]: eth0: Link UP
Jul 7 05:54:18.258005 systemd-networkd[871]: eth0: Gained carrier
Jul 7 05:54:18.258015 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:18.271227 systemd-networkd[871]: enP50214s1: Gained carrier
Jul 7 05:54:18.291171 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:54:18.635011 ignition[864]: Ignition 2.19.0
Jul 7 05:54:18.635022 ignition[864]: Stage: fetch-offline
Jul 7 05:54:18.639234 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:54:18.635059 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.635068 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.635183 ignition[864]: parsed url from cmdline: ""
Jul 7 05:54:18.635186 ignition[864]: no config URL provided
Jul 7 05:54:18.635191 ignition[864]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.668436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:54:18.635199 ignition[864]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.635203 ignition[864]: failed to fetch config: resource requires networking
Jul 7 05:54:18.635614 ignition[864]: Ignition finished successfully
Jul 7 05:54:18.697017 ignition[880]: Ignition 2.19.0
Jul 7 05:54:18.697024 ignition[880]: Stage: fetch
Jul 7 05:54:18.697425 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.697440 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.697558 ignition[880]: parsed url from cmdline: ""
Jul 7 05:54:18.697562 ignition[880]: no config URL provided
Jul 7 05:54:18.697571 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.697579 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.697606 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 05:54:18.832182 ignition[880]: GET result: OK
Jul 7 05:54:18.832260 ignition[880]: config has been read from IMDS userdata
Jul 7 05:54:18.832305 ignition[880]: parsing config with SHA512: f8ed95ade9a123c279de443cdb84295f66a17cb5d9efd4ff1396b12351b50e87d26bebd50df7ab5292faae3835e37498f853aa0cb2eddf3dcc680ec307029a2c
Jul 7 05:54:18.836414 unknown[880]: fetched base config from "system"
Jul 7 05:54:18.836831 ignition[880]: fetch: fetch complete
Jul 7 05:54:18.836421 unknown[880]: fetched base config from "system"
Jul 7 05:54:18.836836 ignition[880]: fetch: fetch passed
Jul 7 05:54:18.836427 unknown[880]: fetched user config from "azure"
Jul 7 05:54:18.836883 ignition[880]: Ignition finished successfully
Jul 7 05:54:18.842626 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:54:18.863663 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:54:18.892716 ignition[886]: Ignition 2.19.0
Jul 7 05:54:18.899684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:54:18.892722 ignition[886]: Stage: kargs
Jul 7 05:54:18.892973 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.892986 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.894037 ignition[886]: kargs: kargs passed
Jul 7 05:54:18.894106 ignition[886]: Ignition finished successfully
Jul 7 05:54:18.928371 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:54:18.950815 ignition[892]: Ignition 2.19.0
Jul 7 05:54:18.950839 ignition[892]: Stage: disks
Jul 7 05:54:18.953770 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:54:18.951054 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.960029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:54:18.951075 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.968853 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:54:18.952103 ignition[892]: disks: disks passed
Jul 7 05:54:18.980700 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:54:18.952155 ignition[892]: Ignition finished successfully
Jul 7 05:54:18.990633 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:54:19.002062 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:54:19.030479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:54:19.096414 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 7 05:54:19.100749 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:54:19.121339 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:54:19.176109 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:54:19.176513 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:54:19.181643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:54:19.223165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:54:19.232378 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:54:19.255441 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 05:54:19.278154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (912)
Jul 7 05:54:19.278179 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:19.271191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:54:19.308840 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:19.308865 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:19.271237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:54:19.308563 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:54:19.331098 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:19.333341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:54:19.340724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:54:19.605222 systemd-networkd[871]: eth0: Gained IPv6LL
Jul 7 05:54:19.733228 systemd-networkd[871]: enP50214s1: Gained IPv6LL
Jul 7 05:54:19.787945 coreos-metadata[914]: Jul 07 05:54:19.787 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 05:54:19.798778 coreos-metadata[914]: Jul 07 05:54:19.798 INFO Fetch successful
Jul 7 05:54:19.798778 coreos-metadata[914]: Jul 07 05:54:19.798 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 05:54:19.815883 coreos-metadata[914]: Jul 07 05:54:19.810 INFO Fetch successful
Jul 7 05:54:19.823941 coreos-metadata[914]: Jul 07 05:54:19.823 INFO wrote hostname ci-4081.3.4-a-2bf61d9e54 to /sysroot/etc/hostname
Jul 7 05:54:19.833313 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:54:19.955048 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:54:19.980165 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:54:19.989784 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:54:19.996189 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:54:20.705573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:54:20.719565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:54:20.733179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:54:20.745802 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:54:20.763314 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:20.780479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:54:20.797234 ignition[1034]: INFO : Ignition 2.19.0
Jul 7 05:54:20.797234 ignition[1034]: INFO : Stage: mount
Jul 7 05:54:20.813950 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:20.813950 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:20.813950 ignition[1034]: INFO : mount: mount passed
Jul 7 05:54:20.813950 ignition[1034]: INFO : Ignition finished successfully
Jul 7 05:54:20.802837 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:54:20.830259 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:54:20.853183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:54:20.887247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1042)
Jul 7 05:54:20.887308 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:20.893293 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:20.897604 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:20.904100 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:20.906551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:54:20.930585 ignition[1060]: INFO : Ignition 2.19.0
Jul 7 05:54:20.930585 ignition[1060]: INFO : Stage: files
Jul 7 05:54:20.938658 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:20.938658 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:20.938658 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:54:20.957471 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:54:20.957471 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:54:20.987625 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:54:20.995460 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:54:20.995460 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:54:20.988043 unknown[1060]: wrote ssh authorized keys file for user: core
Jul 7 05:54:21.019851 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:54:21.030768 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:54:21.202080 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 05:54:22.126309 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:54:22.126309 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:54:22.700841 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 05:54:22.913755 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.913755 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: files passed
Jul 7 05:54:22.988354 ignition[1060]: INFO : Ignition finished successfully
Jul 7 05:54:22.961122 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:54:22.998369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:54:23.016277 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:54:23.036752 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:54:23.079996 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.079996 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.036847 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:54:23.105179 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.053309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:54:23.061795 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:54:23.098396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:54:23.136560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:54:23.136704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:54:23.147667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:54:23.158287 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:54:23.170839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:54:23.192381 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:54:23.211576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:54:23.222316 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:54:23.244740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:54:23.257884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:54:23.264960 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:54:23.276292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:54:23.276487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:54:23.293116 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:54:23.299121 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:54:23.310797 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:54:23.322527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:54:23.333632 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:54:23.346332 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:54:23.358248 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:54:23.371244 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:54:23.382687 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:54:23.395291 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:54:23.405176 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:54:23.405365 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:54:23.420723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:54:23.432249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:54:23.444248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:54:23.444363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:54:23.456899 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:54:23.457080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:54:23.474301 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:54:23.474484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:54:23.490059 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:54:23.490261 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:54:23.501329 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 05:54:23.501487 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:54:23.535236 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:54:23.595589 ignition[1112]: INFO : Ignition 2.19.0
Jul 7 05:54:23.595589 ignition[1112]: INFO : Stage: umount
Jul 7 05:54:23.595589 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:23.595589 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:23.546308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:54:23.649881 ignition[1112]: INFO : umount: umount passed
Jul 7 05:54:23.649881 ignition[1112]: INFO : Ignition finished successfully
Jul 7 05:54:23.566917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:54:23.567241 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:54:23.585884 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:54:23.586025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:54:23.601976 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:54:23.606236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:54:23.613792 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:54:23.616812 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:54:23.616902 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:54:23.624877 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:54:23.624940 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:54:23.637342 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:54:23.637393 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:54:23.643487 systemd[1]: Stopped target network.target - Network.
Jul 7 05:54:23.654998 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:54:23.655076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:54:23.670303 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:54:23.680345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:54:23.686074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:54:23.693120 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:54:23.703517 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:54:23.714523 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:54:23.714577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:54:23.724951 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:54:23.724994 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:54:23.735662 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:54:23.735719 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:54:23.745920 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:54:23.745974 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:54:23.756978 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:54:23.767688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 05:54:23.780541 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 05:54:23.780653 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 05:54:23.784999 systemd-networkd[871]: eth0: DHCPv6 lease lost Jul 7 05:54:23.793480 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 05:54:23.793572 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 05:54:24.042803 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: Data path switched from VF: enP50214s1 Jul 7 05:54:23.805884 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 05:54:23.806065 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 05:54:23.816596 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 05:54:23.816691 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 05:54:23.828973 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 05:54:23.829509 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:54:23.838875 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 05:54:23.838950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 05:54:23.875331 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 05:54:23.885220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 05:54:23.885315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:54:23.896930 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:54:23.896988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:23.908304 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 7 05:54:23.908362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:54:23.919559 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:54:23.919625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:54:23.931981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:54:23.968661 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 05:54:23.968847 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:54:23.979919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:54:23.979973 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:54:23.992015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:54:23.992065 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:54:24.002855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:54:24.002916 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:54:24.027829 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 05:54:24.027900 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:54:24.042849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:54:24.042914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:54:24.072488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:54:24.311962 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 7 05:54:24.087448 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:54:24.087547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 05:54:24.106133 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 05:54:24.106192 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:54:24.118453 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 05:54:24.118511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:54:24.131701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:54:24.131756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:24.144548 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:54:24.144667 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:54:24.155793 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:54:24.157105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:54:24.170736 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:54:24.193390 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:54:24.224144 systemd[1]: Switching root. 
Jul 7 05:54:24.403922 systemd-journald[216]: Journal stopped Jul 7 05:54:13.371274 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 05:54:13.371298 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 05:54:13.371306 kernel: KASLR enabled Jul 7 05:54:13.371312 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 7 05:54:13.371319 kernel: printk: bootconsole [pl11] enabled Jul 7 05:54:13.371325 kernel: efi: EFI v2.7 by EDK II Jul 7 05:54:13.371332 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jul 7 05:54:13.371338 kernel: random: crng init done Jul 7 05:54:13.371344 kernel: ACPI: Early table checksum verification disabled Jul 7 05:54:13.371350 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 7 05:54:13.371357 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371363 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371370 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 7 05:54:13.371377 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371384 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371390 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371397 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371405 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371411 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 
00000001 MSFT 00000001) Jul 7 05:54:13.371418 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 7 05:54:13.371424 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:54:13.371431 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 7 05:54:13.371437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 7 05:54:13.371443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 7 05:54:13.371450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 7 05:54:13.371456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 7 05:54:13.371463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 7 05:54:13.371469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 7 05:54:13.371477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 7 05:54:13.371483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 7 05:54:13.371490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 7 05:54:13.371496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 7 05:54:13.371503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 7 05:54:13.371509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 7 05:54:13.371515 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 7 05:54:13.371521 kernel: Zone ranges: Jul 7 05:54:13.371528 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 7 05:54:13.371534 kernel: DMA32 empty Jul 7 05:54:13.371540 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 05:54:13.371547 kernel: Movable zone start for each node Jul 7 05:54:13.371557 kernel: Early memory node ranges Jul 7 05:54:13.371564 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 7 05:54:13.371571 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 7 
05:54:13.371577 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 7 05:54:13.371584 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 7 05:54:13.371592 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 7 05:54:13.371599 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 7 05:54:13.371606 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 05:54:13.371613 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 7 05:54:13.371619 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 7 05:54:13.371626 kernel: psci: probing for conduit method from ACPI. Jul 7 05:54:13.371633 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 05:54:13.371640 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 05:54:13.371647 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 7 05:54:13.371654 kernel: psci: SMC Calling Convention v1.4 Jul 7 05:54:13.371661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 05:54:13.371668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 7 05:54:13.371676 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 05:54:13.373726 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 05:54:13.373736 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 05:54:13.373743 kernel: Detected PIPT I-cache on CPU0 Jul 7 05:54:13.373750 kernel: CPU features: detected: GIC system register CPU interface Jul 7 05:54:13.373757 kernel: CPU features: detected: Hardware dirty bit management Jul 7 05:54:13.373764 kernel: CPU features: detected: Spectre-BHB Jul 7 05:54:13.373771 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 05:54:13.373778 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 05:54:13.373785 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 05:54:13.373792 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 7 05:54:13.373803 kernel: CPU features: 
detected: SSBS not fully self-synchronizing Jul 7 05:54:13.373811 kernel: alternatives: applying boot alternatives Jul 7 05:54:13.373819 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:54:13.373828 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 05:54:13.373834 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 05:54:13.373842 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 05:54:13.373849 kernel: Fallback order for Node 0: 0 Jul 7 05:54:13.373855 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 7 05:54:13.373862 kernel: Policy zone: Normal Jul 7 05:54:13.373869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 05:54:13.373876 kernel: software IO TLB: area num 2. Jul 7 05:54:13.373884 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 7 05:54:13.373892 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 7 05:54:13.373899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 05:54:13.373905 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 05:54:13.373913 kernel: rcu: RCU event tracing is enabled. Jul 7 05:54:13.373920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 05:54:13.373927 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 05:54:13.373934 kernel: Tracing variant of Tasks RCU enabled. 
Jul 7 05:54:13.373941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 05:54:13.373947 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 05:54:13.373954 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 05:54:13.373963 kernel: GICv3: 960 SPIs implemented Jul 7 05:54:13.373969 kernel: GICv3: 0 Extended SPIs implemented Jul 7 05:54:13.373976 kernel: Root IRQ handler: gic_handle_irq Jul 7 05:54:13.373983 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 05:54:13.373990 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 7 05:54:13.373997 kernel: ITS: No ITS available, not enabling LPIs Jul 7 05:54:13.374004 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 05:54:13.374011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:54:13.374017 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 05:54:13.374024 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 05:54:13.374032 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 05:54:13.374040 kernel: Console: colour dummy device 80x25 Jul 7 05:54:13.374047 kernel: printk: console [tty1] enabled Jul 7 05:54:13.374055 kernel: ACPI: Core revision 20230628 Jul 7 05:54:13.374062 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 05:54:13.374069 kernel: pid_max: default: 32768 minimum: 301 Jul 7 05:54:13.374076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 05:54:13.374083 kernel: landlock: Up and running. Jul 7 05:54:13.374090 kernel: SELinux: Initializing. 
Jul 7 05:54:13.374097 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:54:13.374104 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:54:13.374113 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:54:13.374120 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:54:13.374128 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 7 05:54:13.374135 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 7 05:54:13.374141 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 7 05:54:13.374149 kernel: rcu: Hierarchical SRCU implementation. Jul 7 05:54:13.374156 kernel: rcu: Max phase no-delay instances is 400. Jul 7 05:54:13.374170 kernel: Remapping and enabling EFI services. Jul 7 05:54:13.374177 kernel: smp: Bringing up secondary CPUs ... Jul 7 05:54:13.374185 kernel: Detected PIPT I-cache on CPU1 Jul 7 05:54:13.374192 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 7 05:54:13.374201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:54:13.374208 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 05:54:13.374216 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 05:54:13.374223 kernel: SMP: Total of 2 processors activated. 
Jul 7 05:54:13.374231 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 05:54:13.374240 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 7 05:54:13.374248 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 05:54:13.374255 kernel: CPU features: detected: CRC32 instructions Jul 7 05:54:13.374263 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 05:54:13.374270 kernel: CPU features: detected: LSE atomic instructions Jul 7 05:54:13.374277 kernel: CPU features: detected: Privileged Access Never Jul 7 05:54:13.374285 kernel: CPU: All CPU(s) started at EL1 Jul 7 05:54:13.374292 kernel: alternatives: applying system-wide alternatives Jul 7 05:54:13.374299 kernel: devtmpfs: initialized Jul 7 05:54:13.374309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 05:54:13.374317 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 05:54:13.374324 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 05:54:13.374332 kernel: SMBIOS 3.1.0 present. 
Jul 7 05:54:13.374340 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 7 05:54:13.374347 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 05:54:13.374355 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 05:54:13.374362 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 05:54:13.374370 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 05:54:13.374379 kernel: audit: initializing netlink subsys (disabled) Jul 7 05:54:13.374386 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 7 05:54:13.374394 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 05:54:13.374401 kernel: cpuidle: using governor menu Jul 7 05:54:13.374409 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 7 05:54:13.374416 kernel: ASID allocator initialised with 32768 entries Jul 7 05:54:13.374424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 05:54:13.374431 kernel: Serial: AMBA PL011 UART driver Jul 7 05:54:13.374439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 05:54:13.374447 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 05:54:13.374455 kernel: Modules: 509008 pages in range for PLT usage Jul 7 05:54:13.374462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 05:54:13.374470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 05:54:13.374477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 05:54:13.374485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 05:54:13.374492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 05:54:13.374499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 05:54:13.374507 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 7 05:54:13.374516 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 05:54:13.374523 kernel: ACPI: Added _OSI(Module Device) Jul 7 05:54:13.374531 kernel: ACPI: Added _OSI(Processor Device) Jul 7 05:54:13.374538 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 05:54:13.374545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 05:54:13.374553 kernel: ACPI: Interpreter enabled Jul 7 05:54:13.374560 kernel: ACPI: Using GIC for interrupt routing Jul 7 05:54:13.374568 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 7 05:54:13.374575 kernel: printk: console [ttyAMA0] enabled Jul 7 05:54:13.374584 kernel: printk: bootconsole [pl11] disabled Jul 7 05:54:13.374591 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 7 05:54:13.374599 kernel: iommu: Default domain type: Translated Jul 7 05:54:13.374606 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 05:54:13.374614 kernel: efivars: Registered efivars operations Jul 7 05:54:13.374621 kernel: vgaarb: loaded Jul 7 05:54:13.374629 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 05:54:13.374636 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 05:54:13.374644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 05:54:13.374654 kernel: pnp: PnP ACPI init Jul 7 05:54:13.374662 kernel: pnp: PnP ACPI: found 0 devices Jul 7 05:54:13.374669 kernel: NET: Registered PF_INET protocol family Jul 7 05:54:13.374677 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 05:54:13.374696 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 05:54:13.374704 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 05:54:13.374712 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 05:54:13.374719 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 05:54:13.374727 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 05:54:13.374736 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:54:13.374744 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:54:13.374751 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 05:54:13.374759 kernel: PCI: CLS 0 bytes, default 64 Jul 7 05:54:13.374766 kernel: kvm [1]: HYP mode not available Jul 7 05:54:13.374773 kernel: Initialise system trusted keyrings Jul 7 05:54:13.374781 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 05:54:13.374788 kernel: Key type asymmetric registered Jul 7 05:54:13.374795 kernel: Asymmetric key parser 'x509' registered Jul 7 05:54:13.374804 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 05:54:13.374811 kernel: io scheduler mq-deadline registered Jul 7 05:54:13.374819 kernel: io scheduler kyber registered Jul 7 05:54:13.374826 kernel: io scheduler bfq registered Jul 7 05:54:13.374833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 05:54:13.374841 kernel: thunder_xcv, ver 1.0 Jul 7 05:54:13.374848 kernel: thunder_bgx, ver 1.0 Jul 7 05:54:13.374855 kernel: nicpf, ver 1.0 Jul 7 05:54:13.374862 kernel: nicvf, ver 1.0 Jul 7 05:54:13.375012 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 05:54:13.375088 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:54:12 UTC (1751867652) Jul 7 05:54:13.375098 kernel: efifb: probing for efifb Jul 7 05:54:13.375106 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 7 05:54:13.375114 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 7 05:54:13.375121 kernel: efifb: scrolling: redraw Jul 7 05:54:13.375129 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 05:54:13.375137 kernel: Console: switching to colour 
frame buffer device 128x48 Jul 7 05:54:13.375146 kernel: fb0: EFI VGA frame buffer device Jul 7 05:54:13.375154 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 7 05:54:13.375161 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 05:54:13.375169 kernel: No ACPI PMU IRQ for CPU0 Jul 7 05:54:13.375176 kernel: No ACPI PMU IRQ for CPU1 Jul 7 05:54:13.375184 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 7 05:54:13.375191 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 05:54:13.375199 kernel: watchdog: Hard watchdog permanently disabled Jul 7 05:54:13.375206 kernel: NET: Registered PF_INET6 protocol family Jul 7 05:54:13.375215 kernel: Segment Routing with IPv6 Jul 7 05:54:13.375223 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 05:54:13.375230 kernel: NET: Registered PF_PACKET protocol family Jul 7 05:54:13.375238 kernel: Key type dns_resolver registered Jul 7 05:54:13.375245 kernel: registered taskstats version 1 Jul 7 05:54:13.375253 kernel: Loading compiled-in X.509 certificates Jul 7 05:54:13.375260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 05:54:13.375267 kernel: Key type .fscrypt registered Jul 7 05:54:13.375275 kernel: Key type fscrypt-provisioning registered Jul 7 05:54:13.375284 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 05:54:13.375292 kernel: ima: Allocated hash algorithm: sha1 Jul 7 05:54:13.375299 kernel: ima: No architecture policies found Jul 7 05:54:13.375307 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 05:54:13.375314 kernel: clk: Disabling unused clocks Jul 7 05:54:13.375321 kernel: Freeing unused kernel memory: 39424K Jul 7 05:54:13.375329 kernel: Run /init as init process Jul 7 05:54:13.375336 kernel: with arguments: Jul 7 05:54:13.375343 kernel: /init Jul 7 05:54:13.375352 kernel: with environment: Jul 7 05:54:13.375359 kernel: HOME=/ Jul 7 05:54:13.375366 kernel: TERM=linux Jul 7 05:54:13.375374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 05:54:13.375383 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:54:13.375393 systemd[1]: Detected virtualization microsoft. Jul 7 05:54:13.375401 systemd[1]: Detected architecture arm64. Jul 7 05:54:13.375409 systemd[1]: Running in initrd. Jul 7 05:54:13.375418 systemd[1]: No hostname configured, using default hostname. Jul 7 05:54:13.375426 systemd[1]: Hostname set to . Jul 7 05:54:13.375434 systemd[1]: Initializing machine ID from random generator. Jul 7 05:54:13.375442 systemd[1]: Queued start job for default target initrd.target. Jul 7 05:54:13.375450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:54:13.375458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:54:13.375467 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 7 05:54:13.375475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:54:13.375484 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 05:54:13.375493 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 05:54:13.375502 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 05:54:13.375511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 05:54:13.375519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:54:13.375527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:54:13.375537 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:54:13.375545 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:54:13.375553 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:54:13.375561 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:54:13.375569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:54:13.375578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:54:13.375586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:54:13.375594 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:54:13.375602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:54:13.375612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:54:13.375621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:54:13.375629 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:54:13.375637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 7 05:54:13.375645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:54:13.375653 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 05:54:13.375661 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 05:54:13.375669 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:54:13.375677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:54:13.377764 systemd-journald[216]: Collecting audit messages is disabled. Jul 7 05:54:13.377787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:54:13.377797 systemd-journald[216]: Journal started Jul 7 05:54:13.377818 systemd-journald[216]: Runtime Journal (/run/log/journal/eb8fa030aeea4bd6a2d1fd9920c62ea1) is 8.0M, max 78.5M, 70.5M free. Jul 7 05:54:13.388080 systemd-modules-load[217]: Inserted module 'overlay' Jul 7 05:54:13.405905 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:54:13.418702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 05:54:13.420721 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 05:54:13.435728 kernel: Bridge firewalling registered Jul 7 05:54:13.429315 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 7 05:54:13.430218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:54:13.443344 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 05:54:13.453228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:54:13.466533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:13.492058 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:54:13.500102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 05:54:13.530135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:54:13.551734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:54:13.570374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:13.578839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:13.596967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:54:13.608035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:54:13.637979 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:54:13.646869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:54:13.670932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:54:13.687085 dracut-cmdline[252]: dracut-dracut-053
Jul 7 05:54:13.694472 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:54:13.728937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:54:13.732811 systemd-resolved[253]: Positive Trust Anchors:
Jul 7 05:54:13.732821 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:54:13.732857 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:54:13.735101 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jul 7 05:54:13.742963 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:54:13.757330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:54:13.885694 kernel: SCSI subsystem initialized
Jul 7 05:54:13.892714 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:54:13.903711 kernel: iscsi: registered transport (tcp)
Jul 7 05:54:13.921437 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:54:13.921474 kernel: QLogic iSCSI HBA Driver
Jul 7 05:54:13.964310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:54:13.979943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:54:14.021340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:54:14.021405 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:54:14.028106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:54:14.078716 kernel: raid6: neonx8 gen() 15793 MB/s
Jul 7 05:54:14.097691 kernel: raid6: neonx4 gen() 15667 MB/s
Jul 7 05:54:14.117691 kernel: raid6: neonx2 gen() 13253 MB/s
Jul 7 05:54:14.138692 kernel: raid6: neonx1 gen() 10475 MB/s
Jul 7 05:54:14.159691 kernel: raid6: int64x8 gen() 6955 MB/s
Jul 7 05:54:14.179692 kernel: raid6: int64x4 gen() 7350 MB/s
Jul 7 05:54:14.200693 kernel: raid6: int64x2 gen() 6128 MB/s
Jul 7 05:54:14.225050 kernel: raid6: int64x1 gen() 5059 MB/s
Jul 7 05:54:14.225061 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Jul 7 05:54:14.249579 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 7 05:54:14.249601 kernel: raid6: using neon recovery algorithm
Jul 7 05:54:14.262776 kernel: xor: measuring software checksum speed
Jul 7 05:54:14.262792 kernel: 8regs : 19816 MB/sec
Jul 7 05:54:14.266937 kernel: 32regs : 19636 MB/sec
Jul 7 05:54:14.270881 kernel: arm64_neon : 27034 MB/sec
Jul 7 05:54:14.275423 kernel: xor: using function: arm64_neon (27034 MB/sec)
Jul 7 05:54:14.326697 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:54:14.337074 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:54:14.355867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:54:14.390932 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jul 7 05:54:14.397421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:54:14.417977 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:54:14.443404 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Jul 7 05:54:14.474365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:54:14.493052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:54:14.542094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:54:14.569880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:54:14.595529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:54:14.610430 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:54:14.623803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:54:14.647774 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:54:14.674703 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 05:54:14.683104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:54:14.699532 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:54:14.727761 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 05:54:14.727812 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 05:54:14.727823 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 05:54:14.727844 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 05:54:14.730431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:54:14.771220 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 7 05:54:14.771256 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 05:54:14.771266 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 05:54:14.771408 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 7 05:54:14.771420 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 05:54:14.771432 kernel: scsi host1: storvsc_host_t
Jul 7 05:54:14.730592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:14.789580 kernel: scsi host0: storvsc_host_t
Jul 7 05:54:14.789764 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 05:54:14.802754 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 7 05:54:14.803461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:54:14.820043 kernel: PTP clock support registered
Jul 7 05:54:14.822361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:14.848019 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 05:54:14.848044 kernel: hv_vmbus: registering driver hv_utils
Jul 7 05:54:14.822744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.366827 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 05:54:15.366863 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 05:54:15.366875 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 05:54:15.366888 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: VF slot 1 added
Jul 7 05:54:14.863491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.366835 systemd-resolved[253]: Clock change detected. Flushing caches.
Jul 7 05:54:15.410121 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 05:54:15.410317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 05:54:15.410329 kernel: hv_vmbus: registering driver hv_pci
Jul 7 05:54:15.377906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.429351 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 05:54:15.429542 kernel: hv_pci 15d71d33-c426-4b9f-95b7-042d9b3cf930: PCI VMBus probing: Using version 0x10004
Jul 7 05:54:15.403137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:15.403255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.435640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:15.487540 kernel: hv_pci 15d71d33-c426-4b9f-95b7-042d9b3cf930: PCI host bridge to bus c426:00
Jul 7 05:54:15.487738 kernel: pci_bus c426:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 05:54:15.487846 kernel: pci_bus c426:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 05:54:15.487924 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 05:54:15.488031 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 05:54:15.501365 kernel: pci c426:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 7 05:54:15.501408 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 05:54:15.501520 kernel: pci c426:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:54:15.509130 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 05:54:15.514325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:15.538528 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 05:54:15.538920 kernel: pci c426:00:02.0: enabling Extended Tags
Jul 7 05:54:15.540414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:54:15.582261 kernel: pci c426:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c426:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 7 05:54:15.582460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:15.582471 kernel: pci_bus c426:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 05:54:15.589327 kernel: pci c426:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:54:15.594791 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 05:54:15.613662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:15.648443 kernel: mlx5_core c426:00:02.0: enabling device (0000 -> 0002)
Jul 7 05:54:15.655101 kernel: mlx5_core c426:00:02.0: firmware version: 16.30.1284
Jul 7 05:54:15.853049 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: VF registering: eth1
Jul 7 05:54:15.853278 kernel: mlx5_core c426:00:02.0 eth1: joined to eth0
Jul 7 05:54:15.861237 kernel: mlx5_core c426:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 05:54:15.871108 kernel: mlx5_core c426:00:02.0 enP50214s1: renamed from eth1
Jul 7 05:54:16.080065 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 05:54:16.121350 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489)
Jul 7 05:54:16.137169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:54:16.166174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 05:54:16.192221 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (502)
Jul 7 05:54:16.195583 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 05:54:16.203369 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 05:54:16.241242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:54:16.264191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:16.272108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:17.282176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:54:17.284710 disk-uuid[601]: The operation has completed successfully.
Jul 7 05:54:17.348328 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:54:17.350272 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:54:17.384296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:54:17.397296 sh[687]: Success
Jul 7 05:54:17.426130 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:54:17.585395 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:54:17.608231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:54:17.618353 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:54:17.649842 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:54:17.649902 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:17.657048 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:54:17.662158 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:54:17.666581 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:54:17.897838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:54:17.903733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:54:17.921362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:54:17.929299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:54:17.968322 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:17.968389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:17.973122 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:18.003299 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:18.011036 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:54:18.025336 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:18.031170 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:54:18.050576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:54:18.069460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:54:18.084758 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:54:18.127275 systemd-networkd[871]: lo: Link UP
Jul 7 05:54:18.127286 systemd-networkd[871]: lo: Gained carrier
Jul 7 05:54:18.128839 systemd-networkd[871]: Enumeration completed
Jul 7 05:54:18.129470 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:18.129473 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:54:18.134803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:54:18.141046 systemd[1]: Reached target network.target - Network.
Jul 7 05:54:18.218102 kernel: mlx5_core c426:00:02.0 enP50214s1: Link up
Jul 7 05:54:18.257104 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: Data path switched to VF: enP50214s1
Jul 7 05:54:18.257802 systemd-networkd[871]: enP50214s1: Link UP
Jul 7 05:54:18.257881 systemd-networkd[871]: eth0: Link UP
Jul 7 05:54:18.258005 systemd-networkd[871]: eth0: Gained carrier
Jul 7 05:54:18.258015 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:18.271227 systemd-networkd[871]: enP50214s1: Gained carrier
Jul 7 05:54:18.291171 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:54:18.635011 ignition[864]: Ignition 2.19.0
Jul 7 05:54:18.635022 ignition[864]: Stage: fetch-offline
Jul 7 05:54:18.639234 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:54:18.635059 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.635068 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.635183 ignition[864]: parsed url from cmdline: ""
Jul 7 05:54:18.635186 ignition[864]: no config URL provided
Jul 7 05:54:18.635191 ignition[864]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.668436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:54:18.635199 ignition[864]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.635203 ignition[864]: failed to fetch config: resource requires networking
Jul 7 05:54:18.635614 ignition[864]: Ignition finished successfully
Jul 7 05:54:18.697017 ignition[880]: Ignition 2.19.0
Jul 7 05:54:18.697024 ignition[880]: Stage: fetch
Jul 7 05:54:18.697425 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.697440 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.697558 ignition[880]: parsed url from cmdline: ""
Jul 7 05:54:18.697562 ignition[880]: no config URL provided
Jul 7 05:54:18.697571 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.697579 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:54:18.697606 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 05:54:18.832182 ignition[880]: GET result: OK
Jul 7 05:54:18.832260 ignition[880]: config has been read from IMDS userdata
Jul 7 05:54:18.832305 ignition[880]: parsing config with SHA512: f8ed95ade9a123c279de443cdb84295f66a17cb5d9efd4ff1396b12351b50e87d26bebd50df7ab5292faae3835e37498f853aa0cb2eddf3dcc680ec307029a2c
Jul 7 05:54:18.836414 unknown[880]: fetched base config from "system"
Jul 7 05:54:18.836831 ignition[880]: fetch: fetch complete
Jul 7 05:54:18.836421 unknown[880]: fetched base config from "system"
Jul 7 05:54:18.836836 ignition[880]: fetch: fetch passed
Jul 7 05:54:18.836427 unknown[880]: fetched user config from "azure"
Jul 7 05:54:18.836883 ignition[880]: Ignition finished successfully
Jul 7 05:54:18.842626 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:54:18.863663 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:54:18.892716 ignition[886]: Ignition 2.19.0
Jul 7 05:54:18.899684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:54:18.892722 ignition[886]: Stage: kargs
Jul 7 05:54:18.892973 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.892986 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.894037 ignition[886]: kargs: kargs passed
Jul 7 05:54:18.894106 ignition[886]: Ignition finished successfully
Jul 7 05:54:18.928371 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:54:18.950815 ignition[892]: Ignition 2.19.0
Jul 7 05:54:18.950839 ignition[892]: Stage: disks
Jul 7 05:54:18.953770 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:54:18.951054 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:18.960029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:54:18.951075 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:18.968853 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:54:18.952103 ignition[892]: disks: disks passed
Jul 7 05:54:18.980700 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:54:18.952155 ignition[892]: Ignition finished successfully
Jul 7 05:54:18.990633 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:54:19.002062 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:54:19.030479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:54:19.096414 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 7 05:54:19.100749 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:54:19.121339 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:54:19.176109 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:54:19.176513 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:54:19.181643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:54:19.223165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:54:19.232378 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:54:19.255441 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 05:54:19.278154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (912)
Jul 7 05:54:19.278179 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:19.271191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:54:19.308840 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:19.308865 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:19.271237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:54:19.308563 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:54:19.331098 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:19.333341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:54:19.340724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:54:19.605222 systemd-networkd[871]: eth0: Gained IPv6LL
Jul 7 05:54:19.733228 systemd-networkd[871]: enP50214s1: Gained IPv6LL
Jul 7 05:54:19.787945 coreos-metadata[914]: Jul 07 05:54:19.787 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 05:54:19.798778 coreos-metadata[914]: Jul 07 05:54:19.798 INFO Fetch successful
Jul 7 05:54:19.798778 coreos-metadata[914]: Jul 07 05:54:19.798 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 05:54:19.815883 coreos-metadata[914]: Jul 07 05:54:19.810 INFO Fetch successful
Jul 7 05:54:19.823941 coreos-metadata[914]: Jul 07 05:54:19.823 INFO wrote hostname ci-4081.3.4-a-2bf61d9e54 to /sysroot/etc/hostname
Jul 7 05:54:19.833313 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:54:19.955048 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:54:19.980165 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:54:19.989784 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:54:19.996189 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:54:20.705573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:54:20.719565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:54:20.733179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:54:20.745802 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:54:20.763314 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:20.780479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:54:20.797234 ignition[1034]: INFO : Ignition 2.19.0
Jul 7 05:54:20.797234 ignition[1034]: INFO : Stage: mount
Jul 7 05:54:20.813950 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:20.813950 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:20.813950 ignition[1034]: INFO : mount: mount passed
Jul 7 05:54:20.813950 ignition[1034]: INFO : Ignition finished successfully
Jul 7 05:54:20.802837 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:54:20.830259 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:54:20.853183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:54:20.887247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1042)
Jul 7 05:54:20.887308 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:54:20.893293 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:54:20.897604 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:54:20.904100 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:54:20.906551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:54:20.930585 ignition[1060]: INFO : Ignition 2.19.0
Jul 7 05:54:20.930585 ignition[1060]: INFO : Stage: files
Jul 7 05:54:20.938658 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:20.938658 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:20.938658 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:54:20.957471 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:54:20.957471 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:54:20.987625 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:54:20.995460 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:54:20.995460 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:54:20.988043 unknown[1060]: wrote ssh authorized keys file for user: core
Jul 7 05:54:21.019851 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:54:21.030768 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:54:21.202080 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 05:54:22.126309 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:54:22.126309 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.147197 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:54:22.700841 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 05:54:22.913755 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:54:22.913755 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:54:22.933075 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:54:22.988354 ignition[1060]: INFO : files: files passed
Jul 7 05:54:22.988354 ignition[1060]: INFO : Ignition finished successfully
Jul 7 05:54:22.961122 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:54:22.998369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:54:23.016277 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:54:23.036752 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:54:23.079996 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.079996 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.036847 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:54:23.105179 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:54:23.053309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:54:23.061795 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:54:23.098396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:54:23.136560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:54:23.136704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:54:23.147667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:54:23.158287 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:54:23.170839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:54:23.192381 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:54:23.211576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:54:23.222316 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:54:23.244740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:54:23.257884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:54:23.264960 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:54:23.276292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:54:23.276487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:54:23.293116 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:54:23.299121 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:54:23.310797 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:54:23.322527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:54:23.333632 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:54:23.346332 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:54:23.358248 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:54:23.371244 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:54:23.382687 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:54:23.395291 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:54:23.405176 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:54:23.405365 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:54:23.420723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:54:23.432249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:54:23.444248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:54:23.444363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:54:23.456899 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:54:23.457080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:54:23.474301 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:54:23.474484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:54:23.490059 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:54:23.490261 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:54:23.501329 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 05:54:23.501487 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:54:23.535236 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:54:23.595589 ignition[1112]: INFO : Ignition 2.19.0
Jul 7 05:54:23.595589 ignition[1112]: INFO : Stage: umount
Jul 7 05:54:23.595589 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:54:23.595589 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:54:23.546308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:54:23.649881 ignition[1112]: INFO : umount: umount passed
Jul 7 05:54:23.649881 ignition[1112]: INFO : Ignition finished successfully
Jul 7 05:54:23.566917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:54:23.567241 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:54:23.585884 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:54:23.586025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:54:23.601976 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:54:23.606236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:54:23.613792 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:54:23.616812 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:54:23.616902 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:54:23.624877 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:54:23.624940 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:54:23.637342 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:54:23.637393 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:54:23.643487 systemd[1]: Stopped target network.target - Network.
Jul 7 05:54:23.654998 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:54:23.655076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:54:23.670303 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:54:23.680345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:54:23.686074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:54:23.693120 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:54:23.703517 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:54:23.714523 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:54:23.714577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:54:23.724951 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:54:23.724994 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:54:23.735662 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:54:23.735719 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:54:23.745920 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:54:23.745974 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:54:23.756978 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:54:23.767688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 05:54:23.780541 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:54:23.780653 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:54:23.784999 systemd-networkd[871]: eth0: DHCPv6 lease lost
Jul 7 05:54:23.793480 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 05:54:23.793572 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 05:54:24.042803 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: Data path switched from VF: enP50214s1
Jul 7 05:54:23.805884 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 05:54:23.806065 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 05:54:23.816596 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 05:54:23.816691 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 05:54:23.828973 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 05:54:23.829509 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:54:23.838875 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 05:54:23.838950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 05:54:23.875331 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 05:54:23.885220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 05:54:23.885315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:54:23.896930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:54:23.896988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:23.908304 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 05:54:23.908362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:54:23.919559 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 05:54:23.919625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:54:23.931981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:54:23.968661 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 05:54:23.968847 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:54:23.979919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 05:54:23.979973 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:54:23.992015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 05:54:23.992065 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:54:24.002855 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 05:54:24.002916 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:54:24.027829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 05:54:24.027900 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:54:24.042849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:54:24.042914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:54:24.072488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 05:54:24.311962 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jul 7 05:54:24.087448 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 05:54:24.087547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:54:24.106133 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 05:54:24.106192 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:54:24.118453 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 05:54:24.118511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:54:24.131701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:24.131756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:24.144548 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 05:54:24.144667 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 05:54:24.155793 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 05:54:24.157105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 05:54:24.170736 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 05:54:24.193390 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 05:54:24.224144 systemd[1]: Switching root.
Jul 7 05:54:24.403922 systemd-journald[216]: Journal stopped
Jul 7 05:54:28.486061 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 05:54:28.486103 kernel: SELinux: policy capability open_perms=1
Jul 7 05:54:28.486115 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 05:54:28.486123 kernel: SELinux: policy capability always_check_network=0
Jul 7 05:54:28.486134 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 05:54:28.486142 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 05:54:28.486151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 05:54:28.486203 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 05:54:28.486213 kernel: audit: type=1403 audit(1751867665.733:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 05:54:28.486223 systemd[1]: Successfully loaded SELinux policy in 127.290ms.
Jul 7 05:54:28.486236 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.648ms.
Jul 7 05:54:28.486246 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:54:28.486255 systemd[1]: Detected virtualization microsoft.
Jul 7 05:54:28.486264 systemd[1]: Detected architecture arm64.
Jul 7 05:54:28.486273 systemd[1]: Detected first boot.
Jul 7 05:54:28.486285 systemd[1]: Hostname set to .
Jul 7 05:54:28.486294 systemd[1]: Initializing machine ID from random generator.
Jul 7 05:54:28.486304 zram_generator::config[1153]: No configuration found.
Jul 7 05:54:28.486313 systemd[1]: Populated /etc with preset unit settings.
Jul 7 05:54:28.486323 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 05:54:28.486334 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 05:54:28.486344 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 05:54:28.486356 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 05:54:28.486365 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 05:54:28.486375 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 05:54:28.486384 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 05:54:28.486394 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 05:54:28.486403 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 05:54:28.486413 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 05:54:28.486424 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 05:54:28.486433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:54:28.486443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:54:28.486452 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 05:54:28.486462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 05:54:28.486471 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 05:54:28.486481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:54:28.486490 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 7 05:54:28.486501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:54:28.486510 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 05:54:28.486520 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 05:54:28.486531 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:54:28.486542 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 05:54:28.486552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:54:28.486561 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:54:28.486635 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:54:28.486650 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:54:28.486660 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 05:54:28.486670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 05:54:28.486680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:54:28.486689 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:54:28.486699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:54:28.486711 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 05:54:28.486721 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 05:54:28.486730 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 05:54:28.486740 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 05:54:28.486749 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 05:54:28.486759 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 05:54:28.486769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 05:54:28.486782 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 05:54:28.486816 systemd[1]: Reached target machines.target - Containers.
Jul 7 05:54:28.486827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 05:54:28.486837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:54:28.486850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:54:28.486860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 05:54:28.486870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:54:28.486880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:54:28.486892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:54:28.486901 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 05:54:28.486911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:54:28.486921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 05:54:28.486931 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 05:54:28.486940 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 05:54:28.486950 kernel: loop: module loaded
Jul 7 05:54:28.486959 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 05:54:28.486970 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 05:54:28.486979 kernel: fuse: init (API version 7.39)
Jul 7 05:54:28.486988 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:54:28.486998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:54:28.487008 kernel: ACPI: bus type drm_connector registered
Jul 7 05:54:28.487017 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 05:54:28.487027 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 05:54:28.487037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:54:28.487069 systemd-journald[1256]: Collecting audit messages is disabled.
Jul 7 05:54:28.487671 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 05:54:28.487689 systemd[1]: Stopped verity-setup.service.
Jul 7 05:54:28.487701 systemd-journald[1256]: Journal started
Jul 7 05:54:28.487725 systemd-journald[1256]: Runtime Journal (/run/log/journal/1519313ffbcc4cff9dc5fb926cbd5b74) is 8.0M, max 78.5M, 70.5M free.
Jul 7 05:54:27.437230 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 05:54:27.566924 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 05:54:27.567326 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 05:54:27.567639 systemd[1]: systemd-journald.service: Consumed 3.313s CPU time.
Jul 7 05:54:28.512915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 05:54:28.525376 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:54:28.526214 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 05:54:28.532928 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 05:54:28.538724 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 05:54:28.544940 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 05:54:28.551619 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 05:54:28.557290 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 05:54:28.564344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:54:28.571710 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 05:54:28.571855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 05:54:28.579111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:54:28.579250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:54:28.586313 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:54:28.586450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:54:28.592883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:54:28.593016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:54:28.600630 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 05:54:28.600764 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 05:54:28.606920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:54:28.607042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:54:28.613626 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:54:28.620428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 05:54:28.627940 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 05:54:28.635680 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:54:28.652257 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 05:54:28.664187 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 05:54:28.671696 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 05:54:28.678370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 05:54:28.678415 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:54:28.685674 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 05:54:28.694298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 05:54:28.702159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 05:54:28.708233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:54:28.709737 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 05:54:28.720289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 05:54:28.727155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:54:28.729175 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 05:54:28.735515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:54:28.736869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:54:28.746266 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 05:54:28.758335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:54:28.769506 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 05:54:28.785901 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 05:54:28.793580 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 05:54:28.806441 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 05:54:28.814078 systemd-journald[1256]: Time spent on flushing to /var/log/journal/1519313ffbcc4cff9dc5fb926cbd5b74 is 16.058ms for 900 entries.
Jul 7 05:54:28.814078 systemd-journald[1256]: System Journal (/var/log/journal/1519313ffbcc4cff9dc5fb926cbd5b74) is 8.0M, max 2.6G, 2.6G free.
Jul 7 05:54:28.877595 systemd-journald[1256]: Received client request to flush runtime journal.
Jul 7 05:54:28.831132 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 05:54:28.839276 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 05:54:28.856380 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 05:54:28.869006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:28.878475 udevadm[1290]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 05:54:28.882593 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 05:54:28.906120 kernel: loop0: detected capacity change from 0 to 114432
Jul 7 05:54:28.916635 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 05:54:28.918141 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 05:54:28.941730 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 7 05:54:28.941746 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 7 05:54:28.946723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:54:28.959339 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 05:54:29.200255 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 05:54:29.211320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:54:29.229131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 05:54:29.238940 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Jul 7 05:54:29.238960 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Jul 7 05:54:29.243028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:54:29.292112 kernel: loop1: detected capacity change from 0 to 31320
Jul 7 05:54:29.576117 kernel: loop2: detected capacity change from 0 to 114328
Jul 7 05:54:29.803137 kernel: loop3: detected capacity change from 0 to 203944
Jul 7 05:54:29.828709 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 05:54:29.831729 kernel: loop4: detected capacity change from 0 to 114432
Jul 7 05:54:29.843166 kernel: loop5: detected capacity change from 0 to 31320
Jul 7 05:54:29.852807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:54:29.860124 kernel: loop6: detected capacity change from 0 to 114328
Jul 7 05:54:29.871119 kernel: loop7: detected capacity change from 0 to 203944
Jul 7 05:54:29.876078 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 7 05:54:29.876888 (sd-merge)[1314]: Merged extensions into '/usr'.
Jul 7 05:54:29.877643 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Jul 7 05:54:29.880968 systemd[1]: Reloading requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 05:54:29.880992 systemd[1]: Reloading...
Jul 7 05:54:29.971129 zram_generator::config[1342]: No configuration found.
Jul 7 05:54:30.167125 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 05:54:30.174582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:54:30.246445 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 7 05:54:30.246571 systemd[1]: Reloading finished in 365 ms.
Jul 7 05:54:30.254139 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 05:54:30.264487 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 05:54:30.265000 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 7 05:54:30.272108 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 05:54:30.283210 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 05:54:30.283320 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 05:54:30.289661 kernel: Console: switching to colour dummy device 80x25
Jul 7 05:54:30.309151 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 05:54:30.293605 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:54:30.314783 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 05:54:30.352117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1377)
Jul 7 05:54:30.355535 systemd[1]: Starting ensure-sysext.service...
Jul 7 05:54:30.374390 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:54:30.395320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:54:30.409132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:30.442850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:54:30.457834 systemd[1]: Reloading requested from client PID 1444 ('systemctl') (unit ensure-sysext.service)...
Jul 7 05:54:30.457849 systemd[1]: Reloading...
Jul 7 05:54:30.460478 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 05:54:30.461694 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 05:54:30.463635 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 05:54:30.463880 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 7 05:54:30.463926 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 7 05:54:30.478369 systemd-tmpfiles[1461]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:54:30.478380 systemd-tmpfiles[1461]: Skipping /boot
Jul 7 05:54:30.487832 systemd-tmpfiles[1461]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:54:30.487978 systemd-tmpfiles[1461]: Skipping /boot
Jul 7 05:54:30.544177 zram_generator::config[1508]: No configuration found.
Jul 7 05:54:30.650684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:54:30.727806 systemd[1]: Reloading finished in 269 ms.
Jul 7 05:54:30.761157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:54:30.769228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:30.791491 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:54:30.815500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 05:54:30.824896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 05:54:30.834453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 05:54:30.844461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:54:30.859550 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 05:54:30.881982 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 05:54:30.891118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:54:30.891375 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:30.900772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:30.921728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:54:30.930726 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 05:54:30.945836 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 05:54:30.952654 augenrules[1589]: No rules
Jul 7 05:54:30.953722 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 05:54:30.966790 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:54:30.978711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:54:30.999362 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 05:54:31.007735 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 05:54:31.021221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:54:31.027171 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 05:54:31.040259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:54:31.057431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:54:31.077503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:54:31.085733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:54:31.086690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:54:31.086849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:54:31.098862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:54:31.099033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:54:31.101515 systemd-networkd[1453]: lo: Link UP
Jul 7 05:54:31.101815 systemd-networkd[1453]: lo: Gained carrier
Jul 7 05:54:31.103874 systemd-networkd[1453]: Enumeration completed
Jul 7 05:54:31.105222 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:31.105335 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:54:31.106717 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:54:31.115664 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:54:31.115836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:54:31.116380 lvm[1610]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:54:31.135314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 05:54:31.138885 systemd-resolved[1576]: Positive Trust Anchors:
Jul 7 05:54:31.139314 systemd-resolved[1576]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:54:31.139395 systemd-resolved[1576]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:54:31.142696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:54:31.142908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:54:31.146478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:54:31.153976 systemd-resolved[1576]: Using system hostname 'ci-4081.3.4-a-2bf61d9e54'.
Jul 7 05:54:31.154652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:54:31.162594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:54:31.171346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:54:31.188114 kernel: mlx5_core c426:00:02.0 enP50214s1: Link up
Jul 7 05:54:31.194452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:54:31.201561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:54:31.201771 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 05:54:31.209643 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 05:54:31.225250 kernel: hv_netvsc 000d3ac5-c88a-000d-3ac5-c88a000d3ac5 eth0: Data path switched to VF: enP50214s1
Jul 7 05:54:31.225813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:54:31.226137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:54:31.233383 systemd-networkd[1453]: enP50214s1: Link UP
Jul 7 05:54:31.233742 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:54:31.234487 systemd-networkd[1453]: eth0: Link UP
Jul 7 05:54:31.234494 systemd-networkd[1453]: eth0: Gained carrier
Jul 7 05:54:31.234520 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:54:31.242667 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:54:31.242883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:54:31.250727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:54:31.251912 systemd-networkd[1453]: enP50214s1: Gained carrier
Jul 7 05:54:31.252134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:54:31.259642 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:54:31.259791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:54:31.271199 systemd-networkd[1453]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:54:31.271396 systemd[1]: Finished ensure-sysext.service.
Jul 7 05:54:31.278812 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:54:31.285351 systemd[1]: Reached target network.target - Network.
Jul 7 05:54:31.290388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:54:31.303278 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 05:54:31.311043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:54:31.311259 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:54:31.316792 lvm[1629]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:54:31.346142 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 05:54:31.353813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 05:54:31.362019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 05:54:32.341335 systemd-networkd[1453]: eth0: Gained IPv6LL
Jul 7 05:54:32.347781 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 05:54:32.355478 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 05:54:32.660990 ldconfig[1282]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 05:54:32.670398 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 05:54:32.683239 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 05:54:32.697693 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 05:54:32.704503 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:54:32.710831 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 05:54:32.717730 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 05:54:32.725312 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 05:54:32.731656 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 05:54:32.738846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 05:54:32.746019 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 05:54:32.746056 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:54:32.751102 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:54:32.756806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 05:54:32.764730 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 05:54:32.774755 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 05:54:32.781227 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 05:54:32.787429 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:54:32.792871 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:54:32.797989 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 05:54:32.798018 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 05:54:32.805187 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 7 05:54:32.814245 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 05:54:32.826304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 05:54:32.833495 (chronyd)[1638]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 7 05:54:32.835299 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 05:54:32.842247 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 05:54:32.849273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 05:54:32.859548 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 05:54:32.859591 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jul 7 05:54:32.861594 chronyd[1649]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 7 05:54:32.865526 jq[1644]: false
Jul 7 05:54:32.861868 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 7 05:54:32.872564 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 7 05:54:32.879214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:54:32.881992 KVP[1648]: KVP starting; pid is:1648
Jul 7 05:54:32.882188 chronyd[1649]: Timezone right/UTC failed leap second check, ignoring
Jul 7 05:54:32.886607 chronyd[1649]: Loaded seccomp filter (level 2)
Jul 7 05:54:32.888383 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 05:54:32.896466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 05:54:32.905280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 05:54:32.917486 systemd-networkd[1453]: enP50214s1: Gained IPv6LL
Jul 7 05:54:32.922366 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 05:54:32.936047 KVP[1648]: KVP LIC Version: 3.1
Jul 7 05:54:32.936304 kernel: hv_utils: KVP IC version 4.0
Jul 7 05:54:32.939795 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 05:54:32.948731 dbus-daemon[1641]: [system] SELinux support is enabled
Jul 7 05:54:32.962392 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 05:54:32.968770 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 05:54:32.969315 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 05:54:32.976223 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found loop4
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found loop5
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found loop6
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found loop7
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda1
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda2
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda3
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found usr
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda4
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda6
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda7
Jul 7 05:54:32.984136 extend-filesystems[1645]: Found sda9
Jul 7 05:54:32.984136 extend-filesystems[1645]: Checking size of /dev/sda9
Jul 7 05:54:33.200351 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1366)
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.057 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.060 INFO Fetch successful
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.060 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.069 INFO Fetch successful
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.069 INFO Fetching http://168.63.129.16/machine/d2f46e79-10aa-4bec-af7d-2487e840e3c8/bc894b29%2Da809%2D4fcf%2Db2ea%2D44f3577e5655.%5Fci%2D4081.3.4%2Da%2D2bf61d9e54?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.081 INFO Fetch successful
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.081 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 7 05:54:33.200438 coreos-metadata[1640]: Jul 07 05:54:33.097 INFO Fetch successful
Jul 7 05:54:32.991512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 05:54:33.200770 extend-filesystems[1645]: Old size kept for /dev/sda9
Jul 7 05:54:33.200770 extend-filesystems[1645]: Found sr0
Jul 7 05:54:33.266805 update_engine[1670]: I20250707 05:54:33.068159 1670 main.cc:92] Flatcar Update Engine starting
Jul 7 05:54:33.266805 update_engine[1670]: I20250707 05:54:33.070608 1670 update_check_scheduler.cc:74] Next update check in 7m37s
Jul 7 05:54:33.014007 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 05:54:33.267278 jq[1673]: true
Jul 7 05:54:33.025855 systemd[1]: Started chronyd.service - NTP client/server.
Jul 7 05:54:33.043542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 05:54:33.043745 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 05:54:33.050314 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 05:54:33.050489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 05:54:33.071760 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 05:54:33.072609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 05:54:33.087954 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 05:54:33.205609 systemd-logind[1662]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 7 05:54:33.209898 systemd-logind[1662]: New seat seat0.
Jul 7 05:54:33.212210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 05:54:33.212406 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 05:54:33.230562 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 05:54:33.274167 dbus-daemon[1641]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 05:54:33.278441 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 05:54:33.285555 (ntainerd)[1711]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 05:54:33.291453 jq[1710]: true
Jul 7 05:54:33.292550 tar[1686]: linux-arm64/helm
Jul 7 05:54:33.321082 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 05:54:33.333675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 05:54:33.333908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 05:54:33.334036 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 05:54:33.344405 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 05:54:33.344525 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 05:54:33.360546 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 05:54:33.448012 bash[1744]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 05:54:33.450841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 05:54:33.467562 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 05:54:33.610490 locksmithd[1740]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 05:54:33.862493 containerd[1711]: time="2025-07-07T05:54:33.862358360Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 7 05:54:33.954852 containerd[1711]: time="2025-07-07T05:54:33.953673000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957312960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957361720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957382080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957547920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957565400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957627840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957641120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957808240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957823280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957836280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958054 containerd[1711]: time="2025-07-07T05:54:33.957845360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958365 containerd[1711]: time="2025-07-07T05:54:33.957915840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958365 containerd[1711]: time="2025-07-07T05:54:33.958140720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958365 containerd[1711]: time="2025-07-07T05:54:33.958242680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 05:54:33.958365 containerd[1711]: time="2025-07-07T05:54:33.958256480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 7 05:54:33.958365 containerd[1711]: time="2025-07-07T05:54:33.958334120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 7 05:54:33.958453 containerd[1711]: time="2025-07-07T05:54:33.958381440Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 05:54:33.970933 sshd_keygen[1675]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 05:54:33.973785 containerd[1711]: time="2025-07-07T05:54:33.973725560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.973799480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.973827040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.973846160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.973864800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.974024080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 7 05:54:33.975730 containerd[1711]: time="2025-07-07T05:54:33.975245920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975789960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975812400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975826240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975840120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975862560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975876400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975893080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975908160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975920400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975932360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.975939 containerd[1711]: time="2025-07-07T05:54:33.975945160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.975966760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.975981440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.975993560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976006640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976018520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976031040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976042480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976055720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976069720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976122680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976139800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976148 containerd[1711]: time="2025-07-07T05:54:33.976151800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976358 containerd[1711]: time="2025-07-07T05:54:33.976166440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976358 containerd[1711]: time="2025-07-07T05:54:33.976182320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 7 05:54:33.976358 containerd[1711]: time="2025-07-07T05:54:33.976204840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976358 containerd[1711]: time="2025-07-07T05:54:33.976217440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.976358 containerd[1711]: time="2025-07-07T05:54:33.976229400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 7 05:54:33.977283 containerd[1711]: time="2025-07-07T05:54:33.977153160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 7 05:54:33.978242 containerd[1711]: time="2025-07-07T05:54:33.978163840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 7 05:54:33.978242 containerd[1711]: time="2025-07-07T05:54:33.978196200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 7 05:54:33.978242 containerd[1711]: time="2025-07-07T05:54:33.978213800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 7 05:54:33.978242 containerd[1711]: time="2025-07-07T05:54:33.978224680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.978242 containerd[1711]: time="2025-07-07T05:54:33.978241120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 7 05:54:33.978841 containerd[1711]: time="2025-07-07T05:54:33.978252200Z" level=info msg="NRI interface is disabled by configuration."
Jul 7 05:54:33.978841 containerd[1711]: time="2025-07-07T05:54:33.978265360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 7 05:54:33.978904 containerd[1711]: time="2025-07-07T05:54:33.978563760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 7 05:54:33.978904 containerd[1711]: time="2025-07-07T05:54:33.978622120Z" level=info msg="Connect containerd service"
Jul 7 05:54:33.978904 containerd[1711]: time="2025-07-07T05:54:33.978659040Z" level=info msg="using legacy CRI server"
Jul 7 05:54:33.978904 containerd[1711]: time="2025-07-07T05:54:33.978665880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 05:54:33.981005 containerd[1711]: time="2025-07-07T05:54:33.980960600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 7 05:54:33.986049 containerd[1711]: time="2025-07-07T05:54:33.985999480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 05:54:33.989553 containerd[1711]: time="2025-07-07T05:54:33.986992920Z" level=info msg="Start subscribing containerd event"
Jul 7 05:54:33.989619 containerd[1711]: time="2025-07-07T05:54:33.989571160Z" level=info msg="Start recovering state"
Jul 7 05:54:33.989678 containerd[1711]: time="2025-07-07T05:54:33.989658480Z" level=info msg="Start event monitor"
Jul 7 05:54:33.989704 containerd[1711]: time="2025-07-07T05:54:33.989678240Z" level=info msg="Start snapshots
syncer" Jul 7 05:54:33.989704 containerd[1711]: time="2025-07-07T05:54:33.989694280Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:54:33.989704 containerd[1711]: time="2025-07-07T05:54:33.989702520Z" level=info msg="Start streaming server" Jul 7 05:54:33.990444 containerd[1711]: time="2025-07-07T05:54:33.989320520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:54:33.990444 containerd[1711]: time="2025-07-07T05:54:33.989853760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:54:33.990003 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:54:33.996657 containerd[1711]: time="2025-07-07T05:54:33.996595800Z" level=info msg="containerd successfully booted in 0.136906s" Jul 7 05:54:34.011894 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:54:34.026590 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:54:34.040150 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 7 05:54:34.059282 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:54:34.059469 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:54:34.078593 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:54:34.107739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:54:34.124360 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 05:54:34.149571 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:54:34.168551 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 05:54:34.176378 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:54:34.182197 tar[1686]: linux-arm64/LICENSE Jul 7 05:54:34.182197 tar[1686]: linux-arm64/README.md Jul 7 05:54:34.199271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 05:54:34.211524 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:54:34.211842 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 05:54:34.222356 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:54:34.229503 systemd[1]: Startup finished in 712ms (kernel) + 12.361s (initrd) + 8.621s (userspace) = 21.695s. Jul 7 05:54:34.418900 login[1783]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:34.425522 login[1784]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:34.433760 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:54:34.443918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:54:34.449081 systemd-logind[1662]: New session 1 of user core. Jul 7 05:54:34.457171 systemd-logind[1662]: New session 2 of user core. Jul 7 05:54:34.462886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:54:34.474515 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:54:34.477382 (systemd)[1802]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:54:34.611109 kubelet[1788]: E0707 05:54:34.611003 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:54:34.617395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:54:34.617538 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 05:54:34.640900 systemd[1802]: Queued start job for default target default.target. Jul 7 05:54:34.652636 systemd[1802]: Created slice app.slice - User Application Slice. Jul 7 05:54:34.652867 systemd[1802]: Reached target paths.target - Paths. Jul 7 05:54:34.652941 systemd[1802]: Reached target timers.target - Timers. Jul 7 05:54:34.656244 systemd[1802]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:54:34.665638 systemd[1802]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:54:34.665714 systemd[1802]: Reached target sockets.target - Sockets. Jul 7 05:54:34.665726 systemd[1802]: Reached target basic.target - Basic System. Jul 7 05:54:34.665770 systemd[1802]: Reached target default.target - Main User Target. Jul 7 05:54:34.665798 systemd[1802]: Startup finished in 180ms. Jul 7 05:54:34.666214 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:54:34.676265 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:54:34.676991 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 7 05:54:35.419838 waagent[1779]: 2025-07-07T05:54:35.419744Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 7 05:54:35.426157 waagent[1779]: 2025-07-07T05:54:35.426064Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 7 05:54:35.431103 waagent[1779]: 2025-07-07T05:54:35.430909Z INFO Daemon Daemon Python: 3.11.9 Jul 7 05:54:35.435526 waagent[1779]: 2025-07-07T05:54:35.435454Z INFO Daemon Daemon Run daemon Jul 7 05:54:35.439868 waagent[1779]: 2025-07-07T05:54:35.439810Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 7 05:54:35.449036 waagent[1779]: 2025-07-07T05:54:35.448965Z INFO Daemon Daemon Using waagent for provisioning Jul 7 05:54:35.454491 waagent[1779]: 2025-07-07T05:54:35.454441Z INFO Daemon Daemon Activate resource disk Jul 7 05:54:35.459289 waagent[1779]: 2025-07-07T05:54:35.459233Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 05:54:35.470687 waagent[1779]: 2025-07-07T05:54:35.470623Z INFO Daemon Daemon Found device: None Jul 7 05:54:35.475228 waagent[1779]: 2025-07-07T05:54:35.475173Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 05:54:35.483767 waagent[1779]: 2025-07-07T05:54:35.483705Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 05:54:35.497657 waagent[1779]: 2025-07-07T05:54:35.497592Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 05:54:35.503484 waagent[1779]: 2025-07-07T05:54:35.503424Z INFO Daemon Daemon Running default provisioning handler Jul 7 05:54:35.516162 waagent[1779]: 2025-07-07T05:54:35.516027Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 7 05:54:35.530454 waagent[1779]: 2025-07-07T05:54:35.530382Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 05:54:35.540399 waagent[1779]: 2025-07-07T05:54:35.540327Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 05:54:35.545535 waagent[1779]: 2025-07-07T05:54:35.545478Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 05:54:35.621114 waagent[1779]: 2025-07-07T05:54:35.620970Z INFO Daemon Daemon Successfully mounted dvd Jul 7 05:54:35.647669 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 05:54:35.650135 waagent[1779]: 2025-07-07T05:54:35.649728Z INFO Daemon Daemon Detect protocol endpoint Jul 7 05:54:35.654952 waagent[1779]: 2025-07-07T05:54:35.654865Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 05:54:35.661409 waagent[1779]: 2025-07-07T05:54:35.661331Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 7 05:54:35.668225 waagent[1779]: 2025-07-07T05:54:35.668151Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 05:54:35.673792 waagent[1779]: 2025-07-07T05:54:35.673658Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 05:54:35.678812 waagent[1779]: 2025-07-07T05:54:35.678740Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 05:54:35.708235 waagent[1779]: 2025-07-07T05:54:35.708184Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 05:54:35.715548 waagent[1779]: 2025-07-07T05:54:35.715518Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 05:54:35.720927 waagent[1779]: 2025-07-07T05:54:35.720874Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 05:54:35.895734 waagent[1779]: 2025-07-07T05:54:35.895622Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 05:54:35.902574 waagent[1779]: 2025-07-07T05:54:35.902504Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 7 05:54:35.911764 waagent[1779]: 2025-07-07T05:54:35.911707Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 05:54:35.933784 waagent[1779]: 2025-07-07T05:54:35.933694Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 05:54:35.939601 waagent[1779]: 2025-07-07T05:54:35.939552Z INFO Daemon Jul 7 05:54:35.942488 waagent[1779]: 2025-07-07T05:54:35.942439Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a271505f-2e37-4729-ab75-7cb61cbaa8ac eTag: 2319605676427692254 source: Fabric] Jul 7 05:54:35.954030 waagent[1779]: 2025-07-07T05:54:35.953982Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 7 05:54:35.960958 waagent[1779]: 2025-07-07T05:54:35.960906Z INFO Daemon Jul 7 05:54:35.964039 waagent[1779]: 2025-07-07T05:54:35.963992Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 05:54:35.975010 waagent[1779]: 2025-07-07T05:54:35.974968Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 05:54:36.128410 waagent[1779]: 2025-07-07T05:54:36.128313Z INFO Daemon Downloaded certificate {'thumbprint': '2906CDF18D64CE066934A943F8F20F2A58B193C0', 'hasPrivateKey': False} Jul 7 05:54:36.138618 waagent[1779]: 2025-07-07T05:54:36.138565Z INFO Daemon Downloaded certificate {'thumbprint': '1BEEA7A88FA78B70EFFCDF055F611154778AA7E8', 'hasPrivateKey': True} Jul 7 05:54:36.148681 waagent[1779]: 2025-07-07T05:54:36.148622Z INFO Daemon Fetch goal state completed Jul 7 05:54:36.193813 waagent[1779]: 2025-07-07T05:54:36.193725Z INFO Daemon Daemon Starting provisioning Jul 7 05:54:36.199188 waagent[1779]: 2025-07-07T05:54:36.199112Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 7 05:54:36.204116 waagent[1779]: 2025-07-07T05:54:36.204047Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-2bf61d9e54] Jul 7 05:54:36.229109 waagent[1779]: 2025-07-07T05:54:36.224216Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-2bf61d9e54] Jul 7 05:54:36.230866 waagent[1779]: 2025-07-07T05:54:36.230794Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 05:54:36.237437 waagent[1779]: 2025-07-07T05:54:36.237375Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 05:54:36.274471 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:54:36.274478 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:54:36.274530 systemd-networkd[1453]: eth0: DHCP lease lost Jul 7 05:54:36.276714 waagent[1779]: 2025-07-07T05:54:36.276612Z INFO Daemon Daemon Create user account if not exists Jul 7 05:54:36.282495 waagent[1779]: 2025-07-07T05:54:36.282428Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 05:54:36.288197 systemd-networkd[1453]: eth0: DHCPv6 lease lost Jul 7 05:54:36.288412 waagent[1779]: 2025-07-07T05:54:36.288281Z INFO Daemon Daemon Configure sudoer Jul 7 05:54:36.293094 waagent[1779]: 2025-07-07T05:54:36.293013Z INFO Daemon Daemon Configure sshd Jul 7 05:54:36.297822 waagent[1779]: 2025-07-07T05:54:36.297754Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 05:54:36.311761 waagent[1779]: 2025-07-07T05:54:36.311689Z INFO Daemon Daemon Deploy ssh public key. 
Jul 7 05:54:36.331135 systemd-networkd[1453]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 05:54:37.427051 waagent[1779]: 2025-07-07T05:54:37.426995Z INFO Daemon Daemon Provisioning complete Jul 7 05:54:37.445623 waagent[1779]: 2025-07-07T05:54:37.445573Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 05:54:37.451952 waagent[1779]: 2025-07-07T05:54:37.451885Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 7 05:54:37.461831 waagent[1779]: 2025-07-07T05:54:37.461757Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 7 05:54:37.602705 waagent[1861]: 2025-07-07T05:54:37.601996Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 7 05:54:37.602705 waagent[1861]: 2025-07-07T05:54:37.602178Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 7 05:54:37.602705 waagent[1861]: 2025-07-07T05:54:37.602238Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 7 05:54:37.652862 waagent[1861]: 2025-07-07T05:54:37.652768Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 7 05:54:37.653269 waagent[1861]: 2025-07-07T05:54:37.653216Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:54:37.653461 waagent[1861]: 2025-07-07T05:54:37.653422Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:54:37.662607 waagent[1861]: 2025-07-07T05:54:37.662521Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 05:54:37.674422 waagent[1861]: 2025-07-07T05:54:37.674371Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 05:54:37.675115 waagent[1861]: 2025-07-07T05:54:37.675058Z INFO ExtHandler Jul 7 05:54:37.675291 waagent[1861]: 2025-07-07T05:54:37.675249Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 915966a4-e84e-4e3a-9d57-274b68b64da9 eTag: 2319605676427692254 source: Fabric] Jul 7 05:54:37.675726 waagent[1861]: 2025-07-07T05:54:37.675687Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 7 05:54:37.677164 waagent[1861]: 2025-07-07T05:54:37.676404Z INFO ExtHandler Jul 7 05:54:37.677164 waagent[1861]: 2025-07-07T05:54:37.676484Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 05:54:37.680686 waagent[1861]: 2025-07-07T05:54:37.680644Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 05:54:37.764673 waagent[1861]: 2025-07-07T05:54:37.764571Z INFO ExtHandler Downloaded certificate {'thumbprint': '2906CDF18D64CE066934A943F8F20F2A58B193C0', 'hasPrivateKey': False} Jul 7 05:54:37.765134 waagent[1861]: 2025-07-07T05:54:37.765062Z INFO ExtHandler Downloaded certificate {'thumbprint': '1BEEA7A88FA78B70EFFCDF055F611154778AA7E8', 'hasPrivateKey': True} Jul 7 05:54:37.765611 waagent[1861]: 2025-07-07T05:54:37.765535Z INFO ExtHandler Fetch goal state completed Jul 7 05:54:37.781766 waagent[1861]: 2025-07-07T05:54:37.781705Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1861 Jul 7 05:54:37.781937 waagent[1861]: 2025-07-07T05:54:37.781900Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 05:54:37.783747 waagent[1861]: 2025-07-07T05:54:37.783699Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 05:54:37.784182 waagent[1861]: 2025-07-07T05:54:37.784121Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 05:54:37.809616 waagent[1861]: 2025-07-07T05:54:37.809565Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 05:54:37.809852 waagent[1861]: 
2025-07-07T05:54:37.809808Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 05:54:37.816665 waagent[1861]: 2025-07-07T05:54:37.816617Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 05:54:37.824282 systemd[1]: Reloading requested from client PID 1876 ('systemctl') (unit waagent.service)... Jul 7 05:54:37.824297 systemd[1]: Reloading... Jul 7 05:54:37.909153 zram_generator::config[1919]: No configuration found. Jul 7 05:54:38.000468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:38.080634 systemd[1]: Reloading finished in 256 ms. Jul 7 05:54:38.117031 waagent[1861]: 2025-07-07T05:54:38.116673Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 7 05:54:38.123568 systemd[1]: Reloading requested from client PID 1964 ('systemctl') (unit waagent.service)... Jul 7 05:54:38.123584 systemd[1]: Reloading... Jul 7 05:54:38.203126 zram_generator::config[1998]: No configuration found. Jul 7 05:54:38.311474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:38.388500 systemd[1]: Reloading finished in 264 ms. 
Jul 7 05:54:38.408298 waagent[1861]: 2025-07-07T05:54:38.407463Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 05:54:38.408298 waagent[1861]: 2025-07-07T05:54:38.407638Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 05:54:39.270119 waagent[1861]: 2025-07-07T05:54:39.269171Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 05:54:39.270119 waagent[1861]: 2025-07-07T05:54:39.269792Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 7 05:54:39.271111 waagent[1861]: 2025-07-07T05:54:39.271042Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 05:54:39.271235 waagent[1861]: 2025-07-07T05:54:39.271186Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:54:39.271330 waagent[1861]: 2025-07-07T05:54:39.271294Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:54:39.271569 waagent[1861]: 2025-07-07T05:54:39.271524Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 05:54:39.272176 waagent[1861]: 2025-07-07T05:54:39.272071Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 7 05:54:39.272391 waagent[1861]: 2025-07-07T05:54:39.272335Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 05:54:39.272391 waagent[1861]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 05:54:39.272391 waagent[1861]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 05:54:39.272391 waagent[1861]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 05:54:39.272391 waagent[1861]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:54:39.272391 waagent[1861]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:54:39.272391 waagent[1861]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:54:39.272889 waagent[1861]: 2025-07-07T05:54:39.272837Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 05:54:39.273639 waagent[1861]: 2025-07-07T05:54:39.273550Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 7 05:54:39.274211 waagent[1861]: 2025-07-07T05:54:39.274158Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:54:39.274211 waagent[1861]: 2025-07-07T05:54:39.273979Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 05:54:39.274415 waagent[1861]: 2025-07-07T05:54:39.274212Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 7 05:54:39.274739 waagent[1861]: 2025-07-07T05:54:39.274664Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 05:54:39.275730 waagent[1861]: 2025-07-07T05:54:39.275066Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:54:39.275730 waagent[1861]: 2025-07-07T05:54:39.275314Z INFO EnvHandler ExtHandler Configure routes Jul 7 05:54:39.275730 waagent[1861]: 2025-07-07T05:54:39.275389Z INFO EnvHandler ExtHandler Gateway:None Jul 7 05:54:39.275730 waagent[1861]: 2025-07-07T05:54:39.275432Z INFO EnvHandler ExtHandler Routes:None Jul 7 05:54:39.285416 waagent[1861]: 2025-07-07T05:54:39.285358Z INFO ExtHandler ExtHandler Jul 7 05:54:39.285526 waagent[1861]: 2025-07-07T05:54:39.285479Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4a96c70e-d8f5-4dd7-874d-45f08d6b69fe correlation 9a1b40e9-6424-4a04-8327-7a209a1a6aa6 created: 2025-07-07T05:53:34.721169Z] Jul 7 05:54:39.285918 waagent[1861]: 2025-07-07T05:54:39.285862Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 7 05:54:39.286552 waagent[1861]: 2025-07-07T05:54:39.286507Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 7 05:54:39.320036 waagent[1861]: 2025-07-07T05:54:39.319882Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3EFF89E0-3D1D-4792-A4A6-2C40BE53F02E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 7 05:54:39.338214 waagent[1861]: 2025-07-07T05:54:39.338129Z INFO MonitorHandler ExtHandler Network interfaces: Jul 7 05:54:39.338214 waagent[1861]: Executing ['ip', '-a', '-o', 'link']: Jul 7 05:54:39.338214 waagent[1861]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 7 05:54:39.338214 waagent[1861]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:c8:8a brd ff:ff:ff:ff:ff:ff Jul 7 05:54:39.338214 waagent[1861]: 3: enP50214s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:c8:8a brd ff:ff:ff:ff:ff:ff\ altname enP50214p0s2 Jul 7 05:54:39.338214 waagent[1861]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 7 05:54:39.338214 waagent[1861]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 7 05:54:39.338214 waagent[1861]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 7 05:54:39.338214 waagent[1861]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 7 05:54:39.338214 waagent[1861]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 7 05:54:39.338214 waagent[1861]: 2: eth0 inet6 fe80::20d:3aff:fec5:c88a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 05:54:39.338214 waagent[1861]: 3: enP50214s1 inet6 fe80::20d:3aff:fec5:c88a/64 scope link proto kernel_ll \ 
valid_lft forever preferred_lft forever Jul 7 05:54:39.376123 waagent[1861]: 2025-07-07T05:54:39.375846Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 7 05:54:39.376123 waagent[1861]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.376123 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.376123 waagent[1861]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.376123 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.376123 waagent[1861]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.376123 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.376123 waagent[1861]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 05:54:39.376123 waagent[1861]: 8 941 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 05:54:39.376123 waagent[1861]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 05:54:39.379530 waagent[1861]: 2025-07-07T05:54:39.379461Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 7 05:54:39.379530 waagent[1861]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.379530 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.379530 waagent[1861]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.379530 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.379530 waagent[1861]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:54:39.379530 waagent[1861]: pkts bytes target prot opt in out source destination Jul 7 05:54:39.379530 waagent[1861]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 05:54:39.379530 waagent[1861]: 9 993 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 05:54:39.379530 waagent[1861]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 05:54:39.380261 
waagent[1861]: 2025-07-07T05:54:39.380121Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 7 05:54:44.868184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 05:54:44.875321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:44.976922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:44.983268 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:54:45.099705 kubelet[2091]: E0707 05:54:45.099607 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:54:45.102757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:54:45.102920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:54:51.302381 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:54:51.307369 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:57194.service - OpenSSH per-connection server daemon (10.200.16.10:57194). Jul 7 05:54:51.814952 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 57194 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:51.816410 sshd[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:51.820366 systemd-logind[1662]: New session 3 of user core. Jul 7 05:54:51.827303 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:54:52.248302 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:57204.service - OpenSSH per-connection server daemon (10.200.16.10:57204). 
Jul 7 05:54:52.696464 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 57204 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:52.698743 sshd[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:52.703033 systemd-logind[1662]: New session 4 of user core. Jul 7 05:54:52.709303 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:54:53.025346 sshd[2104]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:53.028866 systemd-logind[1662]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:54:53.029497 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:57204.service: Deactivated successfully. Jul 7 05:54:53.031644 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:54:53.032565 systemd-logind[1662]: Removed session 4. Jul 7 05:54:53.105485 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:57206.service - OpenSSH per-connection server daemon (10.200.16.10:57206). Jul 7 05:54:53.557205 sshd[2111]: Accepted publickey for core from 10.200.16.10 port 57206 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:53.558671 sshd[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:53.563605 systemd-logind[1662]: New session 5 of user core. Jul 7 05:54:53.570278 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 05:54:53.883809 sshd[2111]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:53.892818 systemd-logind[1662]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:54:53.892826 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:54:53.894673 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:57206.service: Deactivated successfully. Jul 7 05:54:53.897525 systemd-logind[1662]: Removed session 5. 
Jul 7 05:54:53.971643 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:57210.service - OpenSSH per-connection server daemon (10.200.16.10:57210).
Jul 7 05:54:54.460749 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 57210 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:54:54.462370 sshd[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:54.467722 systemd-logind[1662]: New session 6 of user core.
Jul 7 05:54:54.474269 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 05:54:54.815802 sshd[2118]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:54.820255 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:57210.service: Deactivated successfully.
Jul 7 05:54:54.821843 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 05:54:54.822543 systemd-logind[1662]: Session 6 logged out. Waiting for processes to exit.
Jul 7 05:54:54.823559 systemd-logind[1662]: Removed session 6.
Jul 7 05:54:54.903689 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:57220.service - OpenSSH per-connection server daemon (10.200.16.10:57220).
Jul 7 05:54:55.305322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 05:54:55.311285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:54:55.394147 sshd[2125]: Accepted publickey for core from 10.200.16.10 port 57220 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:54:55.396471 sshd[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:55.411251 systemd-logind[1662]: New session 7 of user core.
Jul 7 05:54:55.415346 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 05:54:55.426325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:54:55.426457 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:54:55.537353 kubelet[2135]: E0707 05:54:55.537304 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:54:55.540194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:54:55.540340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:54:55.977473 sudo[2143]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 05:54:55.977773 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:54:56.005103 sudo[2143]: pam_unix(sudo:session): session closed for user root
Jul 7 05:54:56.085389 sshd[2125]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:56.089514 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:57220.service: Deactivated successfully.
Jul 7 05:54:56.091148 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 05:54:56.091826 systemd-logind[1662]: Session 7 logged out. Waiting for processes to exit.
Jul 7 05:54:56.092855 systemd-logind[1662]: Removed session 7.
Jul 7 05:54:56.176392 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:57224.service - OpenSSH per-connection server daemon (10.200.16.10:57224).
Jul 7 05:54:56.648840 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 57224 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:54:56.650343 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:56.655542 systemd-logind[1662]: New session 8 of user core.
Jul 7 05:54:56.662317 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 05:54:56.675627 chronyd[1649]: Selected source PHC0
Jul 7 05:54:56.920184 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 05:54:56.920475 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:54:56.923915 sudo[2152]: pam_unix(sudo:session): session closed for user root
Jul 7 05:54:56.929049 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 05:54:56.929372 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:54:56.946350 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 05:54:56.947826 auditctl[2155]: No rules
Jul 7 05:54:56.948196 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 05:54:56.948375 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 05:54:56.951386 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:54:56.987237 augenrules[2173]: No rules
Jul 7 05:54:56.988818 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:54:56.990269 sudo[2151]: pam_unix(sudo:session): session closed for user root
Jul 7 05:54:57.076694 sshd[2148]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:57.080903 systemd-logind[1662]: Session 8 logged out. Waiting for processes to exit.
Jul 7 05:54:57.081937 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:57224.service: Deactivated successfully.
Jul 7 05:54:57.083778 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 05:54:57.085219 systemd-logind[1662]: Removed session 8.
Jul 7 05:54:57.177380 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:57240.service - OpenSSH per-connection server daemon (10.200.16.10:57240).
Jul 7 05:54:57.668154 sshd[2181]: Accepted publickey for core from 10.200.16.10 port 57240 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:54:57.669671 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:57.673901 systemd-logind[1662]: New session 9 of user core.
Jul 7 05:54:57.682249 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 05:54:57.945823 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 05:54:57.946146 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:54:58.763530 (dockerd)[2200]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 05:54:58.763565 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 05:54:59.227124 dockerd[2200]: time="2025-07-07T05:54:59.226818906Z" level=info msg="Starting up"
Jul 7 05:54:59.469246 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2887244558-merged.mount: Deactivated successfully.
Jul 7 05:54:59.538769 dockerd[2200]: time="2025-07-07T05:54:59.538158240Z" level=info msg="Loading containers: start."
Jul 7 05:54:59.687152 kernel: Initializing XFRM netlink socket
Jul 7 05:54:59.817022 systemd-networkd[1453]: docker0: Link UP
Jul 7 05:54:59.841117 dockerd[2200]: time="2025-07-07T05:54:59.840436477Z" level=info msg="Loading containers: done."
Jul 7 05:54:59.852271 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1343737775-merged.mount: Deactivated successfully.
Jul 7 05:54:59.859789 dockerd[2200]: time="2025-07-07T05:54:59.859656875Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 05:54:59.859789 dockerd[2200]: time="2025-07-07T05:54:59.859780475Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 7 05:54:59.859949 dockerd[2200]: time="2025-07-07T05:54:59.859902915Z" level=info msg="Daemon has completed initialization"
Jul 7 05:54:59.911560 dockerd[2200]: time="2025-07-07T05:54:59.911444817Z" level=info msg="API listen on /run/docker.sock"
Jul 7 05:54:59.912633 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 05:55:00.794449 containerd[1711]: time="2025-07-07T05:55:00.794408000Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 05:55:01.609405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041730205.mount: Deactivated successfully.
Jul 7 05:55:02.637753 containerd[1711]: time="2025-07-07T05:55:02.637689599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:02.641183 containerd[1711]: time="2025-07-07T05:55:02.641123686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jul 7 05:55:02.644208 containerd[1711]: time="2025-07-07T05:55:02.644149652Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:02.648417 containerd[1711]: time="2025-07-07T05:55:02.648357900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:02.649608 containerd[1711]: time="2025-07-07T05:55:02.649427822Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.854977662s"
Jul 7 05:55:02.649608 containerd[1711]: time="2025-07-07T05:55:02.649466702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 7 05:55:02.650921 containerd[1711]: time="2025-07-07T05:55:02.650885585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 05:55:03.802126 containerd[1711]: time="2025-07-07T05:55:03.801236176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:03.804441 containerd[1711]: time="2025-07-07T05:55:03.804159862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jul 7 05:55:03.809379 containerd[1711]: time="2025-07-07T05:55:03.809142711Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:03.814806 containerd[1711]: time="2025-07-07T05:55:03.814746642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:03.816061 containerd[1711]: time="2025-07-07T05:55:03.815928845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.16490374s"
Jul 7 05:55:03.816061 containerd[1711]: time="2025-07-07T05:55:03.815968285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 7 05:55:03.816707 containerd[1711]: time="2025-07-07T05:55:03.816634846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 05:55:04.821131 containerd[1711]: time="2025-07-07T05:55:04.820312069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:04.822789 containerd[1711]: time="2025-07-07T05:55:04.822589757Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jul 7 05:55:04.827765 containerd[1711]: time="2025-07-07T05:55:04.827731095Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:04.833265 containerd[1711]: time="2025-07-07T05:55:04.833188194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:04.834462 containerd[1711]: time="2025-07-07T05:55:04.834321798Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.017520992s"
Jul 7 05:55:04.834462 containerd[1711]: time="2025-07-07T05:55:04.834357238Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 7 05:55:04.835420 containerd[1711]: time="2025-07-07T05:55:04.835382722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 05:55:05.690213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 05:55:05.697339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:05.833302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
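For each image, containerd logs both the byte count ("stop pulling … bytes read=N") and the wall-clock pull duration ("Pulled image … in Ts"), so pairing the two entries gives an effective pull rate. A small sketch using the kube-apiserver figures from the entries above:

```python
# Values copied from the kube-apiserver pull entries in the log:
# "stop pulling ... bytes read=25651793" and "Pulled image ... in 1.854977662s".
bytes_read = 25_651_793
seconds = 1.854977662

mb_per_s = bytes_read / seconds / 1e6  # decimal megabytes per second
print(f"effective pull rate: {mb_per_s:.1f} MB/s")  # → effective pull rate: 13.8 MB/s
```

Note the rate reflects only bytes actually transferred for that image; layers already present locally would not appear in the byte count.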
Jul 7 05:55:05.841409 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:55:05.868493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144970596.mount: Deactivated successfully.
Jul 7 05:55:05.910114 kubelet[2410]: E0707 05:55:05.910021 2410 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:55:05.914590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:55:05.914856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:55:06.994003 containerd[1711]: time="2025-07-07T05:55:06.993942128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:06.996001 containerd[1711]: time="2025-07-07T05:55:06.995966615Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jul 7 05:55:06.998926 containerd[1711]: time="2025-07-07T05:55:06.998895386Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:07.002017 containerd[1711]: time="2025-07-07T05:55:07.001968876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:07.002743 containerd[1711]: time="2025-07-07T05:55:07.002608119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 2.167093437s"
Jul 7 05:55:07.002743 containerd[1711]: time="2025-07-07T05:55:07.002644399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 7 05:55:07.003444 containerd[1711]: time="2025-07-07T05:55:07.003064080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 05:55:07.737659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603441228.mount: Deactivated successfully.
Jul 7 05:55:08.663131 containerd[1711]: time="2025-07-07T05:55:08.662654687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:08.664919 containerd[1711]: time="2025-07-07T05:55:08.664688452Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 7 05:55:08.667973 containerd[1711]: time="2025-07-07T05:55:08.667924579Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:08.672692 containerd[1711]: time="2025-07-07T05:55:08.672630909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:08.673890 containerd[1711]: time="2025-07-07T05:55:08.673744911Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.670651751s"
Jul 7 05:55:08.673890 containerd[1711]: time="2025-07-07T05:55:08.673782551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 7 05:55:08.674674 containerd[1711]: time="2025-07-07T05:55:08.674378032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 05:55:09.189506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750140077.mount: Deactivated successfully.
Jul 7 05:55:09.216129 containerd[1711]: time="2025-07-07T05:55:09.215984338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:09.218074 containerd[1711]: time="2025-07-07T05:55:09.217900782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 7 05:55:09.220918 containerd[1711]: time="2025-07-07T05:55:09.220866429Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:09.225218 containerd[1711]: time="2025-07-07T05:55:09.225163638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:09.226050 containerd[1711]: time="2025-07-07T05:55:09.225891119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.482687ms"
Jul 7 05:55:09.226050 containerd[1711]: time="2025-07-07T05:55:09.225924839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 7 05:55:09.226590 containerd[1711]: time="2025-07-07T05:55:09.226548881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 05:55:09.835050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848172886.mount: Deactivated successfully.
Jul 7 05:55:11.916487 containerd[1711]: time="2025-07-07T05:55:11.916421213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:11.919039 containerd[1711]: time="2025-07-07T05:55:11.918994099Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 7 05:55:11.922308 containerd[1711]: time="2025-07-07T05:55:11.922247185Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:11.927715 containerd[1711]: time="2025-07-07T05:55:11.927654597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:11.929149 containerd[1711]: time="2025-07-07T05:55:11.928988160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.702393919s"
Jul 7 05:55:11.929149 containerd[1711]: time="2025-07-07T05:55:11.929030640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 7 05:55:15.940699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 05:55:15.950374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:16.069281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:16.073815 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:55:16.140426 kubelet[2559]: E0707 05:55:16.140371 2559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:55:16.143534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:55:16.143680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:55:18.404684 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 7 05:55:18.553188 update_engine[1670]: I20250707 05:55:18.553120 1670 update_attempter.cc:509] Updating boot flags...
Jul 7 05:55:18.657129 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2578)
Jul 7 05:55:19.194410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:19.202339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:19.238394 systemd[1]: Reloading requested from client PID 2612 ('systemctl') (unit session-9.scope)...
Jul 7 05:55:19.238577 systemd[1]: Reloading...
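The "Scheduled restart job" entries show the restart counter climbing (1 at 05:54:44, 2 at 05:54:55, 3 at 05:55:05, 4 at 05:55:15), i.e. at an even cadence of roughly 10 seconds, which is consistent with a fixed restart delay (presumably the unit's RestartSec) rather than exponential backoff. A quick check of the intervals from the timestamps above:

```python
from datetime import datetime

# "Scheduled restart job" timestamps for restart counters 1-4, from the log above.
stamps = ["05:54:44.868184", "05:54:55.305322", "05:55:05.690213", "05:55:15.940699"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # each gap is roughly 10 s
```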
Jul 7 05:55:19.347127 zram_generator::config[2648]: No configuration found.
Jul 7 05:55:19.465711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:55:19.546037 systemd[1]: Reloading finished in 306 ms.
Jul 7 05:55:19.596167 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 05:55:19.596277 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 05:55:19.596663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:19.602439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:19.781443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:19.794387 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 05:55:19.929390 kubelet[2720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:55:19.929390 kubelet[2720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 05:55:19.929390 kubelet[2720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
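The three deprecation warnings all point at the same migration: those flags belong in the kubelet config file, i.e. the /var/lib/kubelet/config.yaml whose absence caused the earlier crash loop. For illustration only (the values below are hypothetical, not read from this host; kubeadm normally generates the real file during init/join), such a file looks like:

```yaml
# Illustrative /var/lib/kubelet/config.yaml fragment (example values, not
# taken from this node). Flags like --container-runtime-endpoint map to
# fields of KubeletConfiguration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```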
Jul 7 05:55:19.929771 kubelet[2720]: I0707 05:55:19.929469 2720 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 05:55:20.730196 kubelet[2720]: I0707 05:55:20.730152 2720 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 05:55:20.730196 kubelet[2720]: I0707 05:55:20.730187 2720 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 05:55:20.730457 kubelet[2720]: I0707 05:55:20.730435 2720 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 05:55:20.747502 kubelet[2720]: E0707 05:55:20.747461 2720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:55:20.750123 kubelet[2720]: I0707 05:55:20.750035 2720 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 05:55:20.757398 kubelet[2720]: E0707 05:55:20.757361 2720 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 05:55:20.757398 kubelet[2720]: I0707 05:55:20.757394 2720 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 05:55:20.761385 kubelet[2720]: I0707 05:55:20.761357 2720 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 05:55:20.762153 kubelet[2720]: I0707 05:55:20.762130 2720 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 05:55:20.762316 kubelet[2720]: I0707 05:55:20.762282 2720 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 05:55:20.762494 kubelet[2720]: I0707 05:55:20.762317 2720 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-2bf61d9e54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 05:55:20.762582 kubelet[2720]: I0707 05:55:20.762501 2720 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 05:55:20.762582 kubelet[2720]: I0707 05:55:20.762510 2720 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 05:55:20.762646 kubelet[2720]: I0707 05:55:20.762628 2720 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:55:20.764889 kubelet[2720]: I0707 05:55:20.764660 2720 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 05:55:20.764889 kubelet[2720]: I0707 05:55:20.764691 2720 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 05:55:20.764889 kubelet[2720]: I0707 05:55:20.764714 2720 kubelet.go:314] "Adding apiserver pod source"
Jul 7 05:55:20.764889 kubelet[2720]: I0707 05:55:20.764730 2720 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 05:55:20.767861 kubelet[2720]: W0707 05:55:20.767400 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2bf61d9e54&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Jul 7 05:55:20.767861 kubelet[2720]: E0707 05:55:20.767469 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2bf61d9e54&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:55:20.767861 kubelet[2720]: W0707 05:55:20.767731 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Jul 7 05:55:20.767861 kubelet[2720]: E0707 05:55:20.767765 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:55:20.769539 kubelet[2720]: I0707 05:55:20.768440 2720 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 05:55:20.769539 kubelet[2720]: I0707 05:55:20.768926 2720 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 05:55:20.769539 kubelet[2720]: W0707 05:55:20.768973 2720 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 05:55:20.770525 kubelet[2720]: I0707 05:55:20.770504 2720 server.go:1274] "Started kubelet"
Jul 7 05:55:20.774776 kubelet[2720]: E0707 05:55:20.773728 2720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-2bf61d9e54.184fe26905ff6bd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-2bf61d9e54,UID:ci-4081.3.4-a-2bf61d9e54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-2bf61d9e54,},FirstTimestamp:2025-07-07 05:55:20.770481111 +0000 UTC m=+0.972718891,LastTimestamp:2025-07-07 05:55:20.770481111 +0000 UTC m=+0.972718891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-2bf61d9e54,}"
Jul 7 05:55:20.775625 kubelet[2720]: I0707 05:55:20.774923 2720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 05:55:20.775625 kubelet[2720]: I0707 05:55:20.775396 2720 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 05:55:20.775625 kubelet[2720]: I0707 05:55:20.775480 2720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 05:55:20.776205 kubelet[2720]: I0707 05:55:20.776182 2720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 05:55:20.776440 kubelet[2720]: I0707 05:55:20.776409 2720 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 05:55:20.778240 kubelet[2720]: I0707 05:55:20.778209 2720 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 05:55:20.782140 kubelet[2720]: I0707 05:55:20.780317 2720 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 05:55:20.782140 kubelet[2720]: E0707 05:55:20.780587 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2bf61d9e54\" not found"
Jul 7 05:55:20.782140 kubelet[2720]: I0707 05:55:20.781165 2720 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 05:55:20.782140 kubelet[2720]: I0707 05:55:20.781230 2720 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 05:55:20.782140 kubelet[2720]: W0707 05:55:20.781964 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Jul 7 05:55:20.782140 kubelet[2720]: E0707 05:55:20.782021 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:55:20.783223 kubelet[2720]: E0707 05:55:20.782079 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2bf61d9e54?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms"
Jul 7 05:55:20.783918 kubelet[2720]: I0707 05:55:20.783876 2720 factory.go:221] Registration of the systemd container factory successfully
Jul 7 05:55:20.784588 kubelet[2720]: I0707 05:55:20.784556 2720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 05:55:20.787144 kubelet[2720]: E0707 05:55:20.787124 2720 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 05:55:20.787484 kubelet[2720]: I0707 05:55:20.787465 2720 factory.go:221] Registration of the containerd container factory successfully
Jul 7 05:55:20.798382 kubelet[2720]: I0707 05:55:20.798345 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 05:55:20.799517 kubelet[2720]: I0707 05:55:20.799494 2720 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jul 7 05:55:20.799755 kubelet[2720]: I0707 05:55:20.799740 2720 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:55:20.799855 kubelet[2720]: I0707 05:55:20.799844 2720 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:55:20.799960 kubelet[2720]: E0707 05:55:20.799941 2720 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:55:20.806759 kubelet[2720]: W0707 05:55:20.806727 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jul 7 05:55:20.806938 kubelet[2720]: E0707 05:55:20.806916 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:20.881207 kubelet[2720]: E0707 05:55:20.881176 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2bf61d9e54\" not found" Jul 7 05:55:20.900469 kubelet[2720]: E0707 05:55:20.900441 2720 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:55:20.982187 kubelet[2720]: E0707 05:55:20.982033 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2bf61d9e54\" not found" Jul 7 05:55:20.984571 kubelet[2720]: E0707 05:55:20.984526 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2bf61d9e54?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" Jul 7 05:55:21.082744 kubelet[2720]: E0707 05:55:21.082714 2720 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2bf61d9e54\" not found" Jul 7 05:55:21.088218 kubelet[2720]: I0707 05:55:21.088185 2720 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:55:21.088218 kubelet[2720]: I0707 05:55:21.088213 2720 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:55:21.088353 kubelet[2720]: I0707 05:55:21.088234 2720 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:55:21.093563 kubelet[2720]: I0707 05:55:21.093523 2720 policy_none.go:49] "None policy: Start" Jul 7 05:55:21.094622 kubelet[2720]: I0707 05:55:21.094302 2720 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:55:21.094622 kubelet[2720]: I0707 05:55:21.094334 2720 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:55:21.101751 kubelet[2720]: E0707 05:55:21.101725 2720 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:55:21.103919 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 05:55:21.116618 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 05:55:21.120544 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 05:55:21.131115 kubelet[2720]: I0707 05:55:21.130145 2720 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:55:21.131115 kubelet[2720]: I0707 05:55:21.130363 2720 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:55:21.131115 kubelet[2720]: I0707 05:55:21.130374 2720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:55:21.131115 kubelet[2720]: I0707 05:55:21.130737 2720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:55:21.133741 kubelet[2720]: E0707 05:55:21.133716 2720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-2bf61d9e54\" not found" Jul 7 05:55:21.232680 kubelet[2720]: I0707 05:55:21.232571 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.233509 kubelet[2720]: E0707 05:55:21.233452 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.385719 kubelet[2720]: E0707 05:55:21.385676 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2bf61d9e54?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" Jul 7 05:55:21.435595 kubelet[2720]: I0707 05:55:21.435565 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.435911 kubelet[2720]: E0707 05:55:21.435886 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" 
node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.512399 systemd[1]: Created slice kubepods-burstable-pod489fd6940a89a10949472499e2df4203.slice - libcontainer container kubepods-burstable-pod489fd6940a89a10949472499e2df4203.slice. Jul 7 05:55:21.532660 systemd[1]: Created slice kubepods-burstable-pod6157f23da8bef908d145ca7a07b7b746.slice - libcontainer container kubepods-burstable-pod6157f23da8bef908d145ca7a07b7b746.slice. Jul 7 05:55:21.538778 systemd[1]: Created slice kubepods-burstable-podc2cf9e52925e598b221d79057d371391.slice - libcontainer container kubepods-burstable-podc2cf9e52925e598b221d79057d371391.slice. Jul 7 05:55:21.585163 kubelet[2720]: I0707 05:55:21.585124 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585163 kubelet[2720]: I0707 05:55:21.585163 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: \"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585326 kubelet[2720]: I0707 05:55:21.585181 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585326 kubelet[2720]: I0707 05:55:21.585206 2720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585326 kubelet[2720]: I0707 05:55:21.585225 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585326 kubelet[2720]: I0707 05:55:21.585241 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/489fd6940a89a10949472499e2df4203-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-2bf61d9e54\" (UID: \"489fd6940a89a10949472499e2df4203\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585326 kubelet[2720]: I0707 05:55:21.585256 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: \"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585481 kubelet[2720]: I0707 05:55:21.585272 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: 
\"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.585481 kubelet[2720]: I0707 05:55:21.585288 2720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.830628 containerd[1711]: time="2025-07-07T05:55:21.830351861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-2bf61d9e54,Uid:489fd6940a89a10949472499e2df4203,Namespace:kube-system,Attempt:0,}" Jul 7 05:55:21.836544 containerd[1711]: time="2025-07-07T05:55:21.836317074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-2bf61d9e54,Uid:6157f23da8bef908d145ca7a07b7b746,Namespace:kube-system,Attempt:0,}" Jul 7 05:55:21.837484 kubelet[2720]: I0707 05:55:21.837458 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.838038 kubelet[2720]: E0707 05:55:21.838011 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:21.841663 containerd[1711]: time="2025-07-07T05:55:21.841628926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-2bf61d9e54,Uid:c2cf9e52925e598b221d79057d371391,Namespace:kube-system,Attempt:0,}" Jul 7 05:55:21.854607 kubelet[2720]: W0707 05:55:21.854507 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.200.20.24:6443: connect: connection refused Jul 7 05:55:21.854607 kubelet[2720]: E0707 05:55:21.854573 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:21.991911 kubelet[2720]: W0707 05:55:21.991837 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jul 7 05:55:21.991911 kubelet[2720]: E0707 05:55:21.991881 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:22.015513 kubelet[2720]: W0707 05:55:22.015463 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jul 7 05:55:22.015662 kubelet[2720]: E0707 05:55:22.015521 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:22.051241 kubelet[2720]: W0707 05:55:22.051178 2720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2bf61d9e54&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Jul 7 05:55:22.051341 kubelet[2720]: E0707 05:55:22.051256 2720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-2bf61d9e54&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:22.186782 kubelet[2720]: E0707 05:55:22.186656 2720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-2bf61d9e54?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Jul 7 05:55:22.536161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792076392.mount: Deactivated successfully. 
Jul 7 05:55:22.560847 containerd[1711]: time="2025-07-07T05:55:22.560792293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:55:22.563250 containerd[1711]: time="2025-07-07T05:55:22.563203859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 05:55:22.566395 containerd[1711]: time="2025-07-07T05:55:22.566355066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:55:22.569664 containerd[1711]: time="2025-07-07T05:55:22.568903151Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:55:22.574171 containerd[1711]: time="2025-07-07T05:55:22.574133163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:55:22.577978 containerd[1711]: time="2025-07-07T05:55:22.576927329Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:55:22.579772 containerd[1711]: time="2025-07-07T05:55:22.579520014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:55:22.584676 containerd[1711]: time="2025-07-07T05:55:22.584619025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:55:22.585621 
containerd[1711]: time="2025-07-07T05:55:22.585383027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 743.509181ms" Jul 7 05:55:22.587831 containerd[1711]: time="2025-07-07T05:55:22.587790272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 751.393918ms" Jul 7 05:55:22.588510 containerd[1711]: time="2025-07-07T05:55:22.588479794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 758.011453ms" Jul 7 05:55:22.640739 kubelet[2720]: I0707 05:55:22.640640 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:22.641136 kubelet[2720]: E0707 05:55:22.641066 2720 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:22.934803 kubelet[2720]: E0707 05:55:22.934663 2720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:55:23.034744 containerd[1711]: time="2025-07-07T05:55:23.034197366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:23.034744 containerd[1711]: time="2025-07-07T05:55:23.034273006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:23.034744 containerd[1711]: time="2025-07-07T05:55:23.034291526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.034744 containerd[1711]: time="2025-07-07T05:55:23.034371126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.035894 containerd[1711]: time="2025-07-07T05:55:23.035389568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:23.035894 containerd[1711]: time="2025-07-07T05:55:23.035675369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:23.035894 containerd[1711]: time="2025-07-07T05:55:23.035692329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.037387 containerd[1711]: time="2025-07-07T05:55:23.035921249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.038621 containerd[1711]: time="2025-07-07T05:55:23.037578253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:23.038870 containerd[1711]: time="2025-07-07T05:55:23.038583775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:23.038870 containerd[1711]: time="2025-07-07T05:55:23.038602775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.039404 containerd[1711]: time="2025-07-07T05:55:23.039170576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:23.065378 systemd[1]: Started cri-containerd-2296c2900940a4944218e2abf7f44330986c6d3e76a6e0b2f54c3503da8a2972.scope - libcontainer container 2296c2900940a4944218e2abf7f44330986c6d3e76a6e0b2f54c3503da8a2972. Jul 7 05:55:23.067563 systemd[1]: Started cri-containerd-682a9bcb44212b1e922bfd0c17a316764e1aa5a1322ce9b94258b1419f500844.scope - libcontainer container 682a9bcb44212b1e922bfd0c17a316764e1aa5a1322ce9b94258b1419f500844. Jul 7 05:55:23.072775 systemd[1]: Started cri-containerd-221bbf0a5d07dc306f78eeb1fa54e9629f9d67cd0ef07bdc843b5dcf86281f09.scope - libcontainer container 221bbf0a5d07dc306f78eeb1fa54e9629f9d67cd0ef07bdc843b5dcf86281f09. 
Jul 7 05:55:23.111863 kubelet[2720]: E0707 05:55:23.111641 2720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-2bf61d9e54.184fe26905ff6bd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-2bf61d9e54,UID:ci-4081.3.4-a-2bf61d9e54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-2bf61d9e54,},FirstTimestamp:2025-07-07 05:55:20.770481111 +0000 UTC m=+0.972718891,LastTimestamp:2025-07-07 05:55:20.770481111 +0000 UTC m=+0.972718891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-2bf61d9e54,}" Jul 7 05:55:23.115214 containerd[1711]: time="2025-07-07T05:55:23.114716901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-2bf61d9e54,Uid:489fd6940a89a10949472499e2df4203,Namespace:kube-system,Attempt:0,} returns sandbox id \"682a9bcb44212b1e922bfd0c17a316764e1aa5a1322ce9b94258b1419f500844\"" Jul 7 05:55:23.121127 containerd[1711]: time="2025-07-07T05:55:23.120956955Z" level=info msg="CreateContainer within sandbox \"682a9bcb44212b1e922bfd0c17a316764e1aa5a1322ce9b94258b1419f500844\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:55:23.131749 containerd[1711]: time="2025-07-07T05:55:23.131672298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-2bf61d9e54,Uid:6157f23da8bef908d145ca7a07b7b746,Namespace:kube-system,Attempt:0,} returns sandbox id \"221bbf0a5d07dc306f78eeb1fa54e9629f9d67cd0ef07bdc843b5dcf86281f09\"" Jul 7 05:55:23.136006 containerd[1711]: time="2025-07-07T05:55:23.135901627Z" level=info 
msg="CreateContainer within sandbox \"221bbf0a5d07dc306f78eeb1fa54e9629f9d67cd0ef07bdc843b5dcf86281f09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:55:23.139821 containerd[1711]: time="2025-07-07T05:55:23.139774716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-2bf61d9e54,Uid:c2cf9e52925e598b221d79057d371391,Namespace:kube-system,Attempt:0,} returns sandbox id \"2296c2900940a4944218e2abf7f44330986c6d3e76a6e0b2f54c3503da8a2972\"" Jul 7 05:55:23.142701 containerd[1711]: time="2025-07-07T05:55:23.142219481Z" level=info msg="CreateContainer within sandbox \"2296c2900940a4944218e2abf7f44330986c6d3e76a6e0b2f54c3503da8a2972\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:55:23.184542 containerd[1711]: time="2025-07-07T05:55:23.184485493Z" level=info msg="CreateContainer within sandbox \"682a9bcb44212b1e922bfd0c17a316764e1aa5a1322ce9b94258b1419f500844\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"679845344a393764365f372bfdae6740e91807e00e2db2786c7c3f132d307d55\"" Jul 7 05:55:23.186435 containerd[1711]: time="2025-07-07T05:55:23.185201815Z" level=info msg="StartContainer for \"679845344a393764365f372bfdae6740e91807e00e2db2786c7c3f132d307d55\"" Jul 7 05:55:23.195217 containerd[1711]: time="2025-07-07T05:55:23.195171876Z" level=info msg="CreateContainer within sandbox \"221bbf0a5d07dc306f78eeb1fa54e9629f9d67cd0ef07bdc843b5dcf86281f09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"612e60dbe008d193066dc1ac5cc4112c05e5f4ca58bfaa7d8c71dcd6cbacaa9e\"" Jul 7 05:55:23.195851 containerd[1711]: time="2025-07-07T05:55:23.195832558Z" level=info msg="StartContainer for \"612e60dbe008d193066dc1ac5cc4112c05e5f4ca58bfaa7d8c71dcd6cbacaa9e\"" Jul 7 05:55:23.211449 containerd[1711]: time="2025-07-07T05:55:23.211411352Z" level=info msg="CreateContainer within sandbox 
\"2296c2900940a4944218e2abf7f44330986c6d3e76a6e0b2f54c3503da8a2972\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ae8a845d3acab622329fc82d3d5ede9035f461a8590b33b48bd8a3d1daf3bc5\"" Jul 7 05:55:23.212535 containerd[1711]: time="2025-07-07T05:55:23.212488314Z" level=info msg="StartContainer for \"8ae8a845d3acab622329fc82d3d5ede9035f461a8590b33b48bd8a3d1daf3bc5\"" Jul 7 05:55:23.216308 systemd[1]: Started cri-containerd-679845344a393764365f372bfdae6740e91807e00e2db2786c7c3f132d307d55.scope - libcontainer container 679845344a393764365f372bfdae6740e91807e00e2db2786c7c3f132d307d55. Jul 7 05:55:23.226307 systemd[1]: Started cri-containerd-612e60dbe008d193066dc1ac5cc4112c05e5f4ca58bfaa7d8c71dcd6cbacaa9e.scope - libcontainer container 612e60dbe008d193066dc1ac5cc4112c05e5f4ca58bfaa7d8c71dcd6cbacaa9e. Jul 7 05:55:23.265328 systemd[1]: Started cri-containerd-8ae8a845d3acab622329fc82d3d5ede9035f461a8590b33b48bd8a3d1daf3bc5.scope - libcontainer container 8ae8a845d3acab622329fc82d3d5ede9035f461a8590b33b48bd8a3d1daf3bc5. 
Jul 7 05:55:23.274503 containerd[1711]: time="2025-07-07T05:55:23.273935648Z" level=info msg="StartContainer for \"679845344a393764365f372bfdae6740e91807e00e2db2786c7c3f132d307d55\" returns successfully" Jul 7 05:55:23.287049 containerd[1711]: time="2025-07-07T05:55:23.286516556Z" level=info msg="StartContainer for \"612e60dbe008d193066dc1ac5cc4112c05e5f4ca58bfaa7d8c71dcd6cbacaa9e\" returns successfully" Jul 7 05:55:23.331183 containerd[1711]: time="2025-07-07T05:55:23.330619612Z" level=info msg="StartContainer for \"8ae8a845d3acab622329fc82d3d5ede9035f461a8590b33b48bd8a3d1daf3bc5\" returns successfully" Jul 7 05:55:24.245746 kubelet[2720]: I0707 05:55:24.244472 2720 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:26.176131 kubelet[2720]: E0707 05:55:26.174901 2720 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-2bf61d9e54\" not found" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:26.239122 kubelet[2720]: I0707 05:55:26.238139 2720 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:55:26.771800 kubelet[2720]: I0707 05:55:26.771537 2720 apiserver.go:52] "Watching apiserver" Jul 7 05:55:26.781645 kubelet[2720]: I0707 05:55:26.781619 2720 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:55:28.368397 systemd[1]: Reloading requested from client PID 3001 ('systemctl') (unit session-9.scope)... Jul 7 05:55:28.368412 systemd[1]: Reloading... Jul 7 05:55:28.464148 zram_generator::config[3041]: No configuration found. Jul 7 05:55:28.584042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:55:28.676736 systemd[1]: Reloading finished in 308 ms. 
Jul 7 05:55:28.715176 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:28.735217 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 05:55:28.735491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:28.735557 systemd[1]: kubelet.service: Consumed 1.238s CPU time, 128.3M memory peak, 0B memory swap peak.
Jul 7 05:55:28.740868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:55:28.858360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:55:28.870519 (kubelet)[3105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 05:55:28.919137 kubelet[3105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:55:28.919137 kubelet[3105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 05:55:28.919137 kubelet[3105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:55:28.919137 kubelet[3105]: I0707 05:55:28.918946 3105 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 05:55:28.928605 kubelet[3105]: I0707 05:55:28.928489 3105 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 05:55:28.928605 kubelet[3105]: I0707 05:55:28.928524 3105 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 05:55:28.929049 kubelet[3105]: I0707 05:55:28.928801 3105 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 05:55:28.931488 kubelet[3105]: I0707 05:55:28.931196 3105 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 05:55:28.933552 kubelet[3105]: I0707 05:55:28.933527 3105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 05:55:28.938260 kubelet[3105]: E0707 05:55:28.938215 3105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 05:55:28.939292 kubelet[3105]: I0707 05:55:28.939233 3105 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 05:55:28.942168 kubelet[3105]: I0707 05:55:28.942139 3105 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 05:55:28.942328 kubelet[3105]: I0707 05:55:28.942266 3105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 05:55:28.942386 kubelet[3105]: I0707 05:55:28.942363 3105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 05:55:28.942567 kubelet[3105]: I0707 05:55:28.942388 3105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-2bf61d9e54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 05:55:28.942657 kubelet[3105]: I0707 05:55:28.942571 3105 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 05:55:28.942657 kubelet[3105]: I0707 05:55:28.942580 3105 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 05:55:28.942657 kubelet[3105]: I0707 05:55:28.942615 3105 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:55:28.942749 kubelet[3105]: I0707 05:55:28.942720 3105 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 05:55:28.942749 kubelet[3105]: I0707 05:55:28.942732 3105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 05:55:28.944045 kubelet[3105]: I0707 05:55:28.942752 3105 kubelet.go:314] "Adding apiserver pod source"
Jul 7 05:55:28.944045 kubelet[3105]: I0707 05:55:28.942766 3105 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 05:55:28.945126 kubelet[3105]: I0707 05:55:28.945022 3105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 05:55:28.946037 kubelet[3105]: I0707 05:55:28.946009 3105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 05:55:28.947260 kubelet[3105]: I0707 05:55:28.947242 3105 server.go:1274] "Started kubelet"
Jul 7 05:55:28.956514 kubelet[3105]: I0707 05:55:28.956315 3105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 05:55:28.964709 kubelet[3105]: I0707 05:55:28.964668 3105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 05:55:28.966133 kubelet[3105]: I0707 05:55:28.965883 3105 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 05:55:28.978529 kubelet[3105]: I0707 05:55:28.969362 3105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 05:55:28.978529 kubelet[3105]: I0707 05:55:28.969714 3105 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 05:55:28.979445 kubelet[3105]: I0707 05:55:28.969723 3105 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 05:55:28.979589 kubelet[3105]: I0707 05:55:28.979569 3105 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 05:55:28.982132 kubelet[3105]: E0707 05:55:28.969856 3105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-2bf61d9e54\" not found"
Jul 7 05:55:28.988745 kubelet[3105]: I0707 05:55:28.988682 3105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 05:55:28.989064 kubelet[3105]: I0707 05:55:28.989046 3105 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 05:55:28.991841 kubelet[3105]: E0707 05:55:28.991809 3105 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 05:55:28.992639 kubelet[3105]: I0707 05:55:28.992047 3105 factory.go:221] Registration of the containerd container factory successfully
Jul 7 05:55:28.992639 kubelet[3105]: I0707 05:55:28.992066 3105 factory.go:221] Registration of the systemd container factory successfully
Jul 7 05:55:28.992639 kubelet[3105]: I0707 05:55:28.992170 3105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 05:55:28.992901 kubelet[3105]: I0707 05:55:28.992870 3105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 05:55:28.994831 kubelet[3105]: I0707 05:55:28.994730 3105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 05:55:28.994984 kubelet[3105]: I0707 05:55:28.994971 3105 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 05:55:28.995060 kubelet[3105]: I0707 05:55:28.995050 3105 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 05:55:28.995277 kubelet[3105]: E0707 05:55:28.995256 3105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 05:55:29.056342 kubelet[3105]: I0707 05:55:29.056313 3105 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 05:55:29.056580 kubelet[3105]: I0707 05:55:29.056563 3105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 05:55:29.056931 kubelet[3105]: I0707 05:55:29.056650 3105 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:55:29.056931 kubelet[3105]: I0707 05:55:29.056810 3105 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 05:55:29.056931 kubelet[3105]: I0707 05:55:29.056821 3105 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 05:55:29.056931 kubelet[3105]: I0707 05:55:29.056839 3105 policy_none.go:49] "None policy: Start"
Jul 7 05:55:29.057784 kubelet[3105]: I0707 05:55:29.057761 3105 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 05:55:29.057784 kubelet[3105]: I0707 05:55:29.057790 3105 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 05:55:29.057988 kubelet[3105]: I0707 05:55:29.057967 3105 state_mem.go:75] "Updated machine memory state"
Jul 7 05:55:29.062220 kubelet[3105]: I0707 05:55:29.062189 3105 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 05:55:29.062391 kubelet[3105]: I0707 05:55:29.062370 3105 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 05:55:29.062432 kubelet[3105]: I0707 05:55:29.062390 3105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 05:55:29.063052 kubelet[3105]: I0707 05:55:29.063014 3105 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 05:55:29.113779 kubelet[3105]: W0707 05:55:29.113706 3105 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 05:55:29.118736 kubelet[3105]: W0707 05:55:29.118690 3105 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 05:55:29.118881 kubelet[3105]: W0707 05:55:29.118753 3105 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 05:55:29.165989 kubelet[3105]: I0707 05:55:29.165601 3105 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.178947 kubelet[3105]: I0707 05:55:29.178839 3105 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.178947 kubelet[3105]: I0707 05:55:29.178930 3105 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281160 kubelet[3105]: I0707 05:55:29.281069 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: \"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281160 kubelet[3105]: I0707 05:55:29.281135 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281160 kubelet[3105]: I0707 05:55:29.281155 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281437 kubelet[3105]: I0707 05:55:29.281174 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281437 kubelet[3105]: I0707 05:55:29.281193 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/489fd6940a89a10949472499e2df4203-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-2bf61d9e54\" (UID: \"489fd6940a89a10949472499e2df4203\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281437 kubelet[3105]: I0707 05:55:29.281235 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: \"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281437 kubelet[3105]: I0707 05:55:29.281251 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6157f23da8bef908d145ca7a07b7b746-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" (UID: \"6157f23da8bef908d145ca7a07b7b746\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281437 kubelet[3105]: I0707 05:55:29.281267 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.281553 kubelet[3105]: I0707 05:55:29.281282 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2cf9e52925e598b221d79057d371391-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-2bf61d9e54\" (UID: \"c2cf9e52925e598b221d79057d371391\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:29.943775 kubelet[3105]: I0707 05:55:29.943539 3105 apiserver.go:52] "Watching apiserver"
Jul 7 05:55:29.979839 kubelet[3105]: I0707 05:55:29.979786 3105 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 05:55:30.041448 kubelet[3105]: W0707 05:55:30.041199 3105 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 05:55:30.041448 kubelet[3105]: E0707 05:55:30.041281 3105 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-2bf61d9e54\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:30.042256 kubelet[3105]: W0707 05:55:30.042159 3105 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 05:55:30.042256 kubelet[3105]: E0707 05:55:30.042206 3105 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.4-a-2bf61d9e54\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2bf61d9e54"
Jul 7 05:55:30.082443 kubelet[3105]: I0707 05:55:30.081873 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-2bf61d9e54" podStartSLOduration=1.081854462 podStartE2EDuration="1.081854462s" podCreationTimestamp="2025-07-07 05:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:55:30.062290258 +0000 UTC m=+1.187338230" watchObservedRunningTime="2025-07-07 05:55:30.081854462 +0000 UTC m=+1.206902434"
Jul 7 05:55:30.082443 kubelet[3105]: I0707 05:55:30.082024 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-2bf61d9e54" podStartSLOduration=1.082019343 podStartE2EDuration="1.082019343s" podCreationTimestamp="2025-07-07 05:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:55:30.081811502 +0000 UTC m=+1.206859474" watchObservedRunningTime="2025-07-07 05:55:30.082019343 +0000 UTC m=+1.207067315"
Jul 7 05:55:35.270257 kubelet[3105]: I0707 05:55:35.270231 3105 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 05:55:35.271633 kubelet[3105]: I0707 05:55:35.271035 3105 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 05:55:35.271665 containerd[1711]: time="2025-07-07T05:55:35.270819808Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 05:55:36.065680 kubelet[3105]: I0707 05:55:36.064731 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-2bf61d9e54" podStartSLOduration=7.064691152 podStartE2EDuration="7.064691152s" podCreationTimestamp="2025-07-07 05:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:55:30.097576098 +0000 UTC m=+1.222624070" watchObservedRunningTime="2025-07-07 05:55:36.064691152 +0000 UTC m=+7.189739124"
Jul 7 05:55:36.079899 systemd[1]: Created slice kubepods-besteffort-pod719eec71_2633_40c9_b86b_cecd0bec394e.slice - libcontainer container kubepods-besteffort-pod719eec71_2633_40c9_b86b_cecd0bec394e.slice.
Jul 7 05:55:36.126398 kubelet[3105]: I0707 05:55:36.126258 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/719eec71-2633-40c9-b86b-cecd0bec394e-kube-proxy\") pod \"kube-proxy-j5qhq\" (UID: \"719eec71-2633-40c9-b86b-cecd0bec394e\") " pod="kube-system/kube-proxy-j5qhq"
Jul 7 05:55:36.126398 kubelet[3105]: I0707 05:55:36.126316 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719eec71-2633-40c9-b86b-cecd0bec394e-lib-modules\") pod \"kube-proxy-j5qhq\" (UID: \"719eec71-2633-40c9-b86b-cecd0bec394e\") " pod="kube-system/kube-proxy-j5qhq"
Jul 7 05:55:36.126398 kubelet[3105]: I0707 05:55:36.126334 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719eec71-2633-40c9-b86b-cecd0bec394e-xtables-lock\") pod \"kube-proxy-j5qhq\" (UID: \"719eec71-2633-40c9-b86b-cecd0bec394e\") " pod="kube-system/kube-proxy-j5qhq"
Jul 7 05:55:36.126398 kubelet[3105]: I0707 05:55:36.126353 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqjs\" (UniqueName: \"kubernetes.io/projected/719eec71-2633-40c9-b86b-cecd0bec394e-kube-api-access-8xqjs\") pod \"kube-proxy-j5qhq\" (UID: \"719eec71-2633-40c9-b86b-cecd0bec394e\") " pod="kube-system/kube-proxy-j5qhq"
Jul 7 05:55:36.353259 systemd[1]: Created slice kubepods-besteffort-podd9ecd009_ad3b_4701_9b62_7c6b69b9cbaa.slice - libcontainer container kubepods-besteffort-podd9ecd009_ad3b_4701_9b62_7c6b69b9cbaa.slice.
Jul 7 05:55:36.389748 containerd[1711]: time="2025-07-07T05:55:36.389698123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5qhq,Uid:719eec71-2633-40c9-b86b-cecd0bec394e,Namespace:kube-system,Attempt:0,}"
Jul 7 05:55:36.424149 containerd[1711]: time="2025-07-07T05:55:36.423977240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:55:36.424352 containerd[1711]: time="2025-07-07T05:55:36.424047840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:55:36.424352 containerd[1711]: time="2025-07-07T05:55:36.424071840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:55:36.424643 containerd[1711]: time="2025-07-07T05:55:36.424453481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:55:36.428728 kubelet[3105]: I0707 05:55:36.428609 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-69kpw\" (UID: \"d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-69kpw"
Jul 7 05:55:36.428728 kubelet[3105]: I0707 05:55:36.428656 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6wxm\" (UniqueName: \"kubernetes.io/projected/d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa-kube-api-access-x6wxm\") pod \"tigera-operator-5bf8dfcb4-69kpw\" (UID: \"d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-69kpw"
Jul 7 05:55:36.445409 systemd[1]: Started cri-containerd-cd2ab0e45e1a05fdefc01aa9607a0ae1bc6e93c0fff01a6d876c2d00cc49f356.scope - libcontainer container cd2ab0e45e1a05fdefc01aa9607a0ae1bc6e93c0fff01a6d876c2d00cc49f356.
Jul 7 05:55:36.476152 containerd[1711]: time="2025-07-07T05:55:36.476077557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5qhq,Uid:719eec71-2633-40c9-b86b-cecd0bec394e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd2ab0e45e1a05fdefc01aa9607a0ae1bc6e93c0fff01a6d876c2d00cc49f356\""
Jul 7 05:55:36.480154 containerd[1711]: time="2025-07-07T05:55:36.480063806Z" level=info msg="CreateContainer within sandbox \"cd2ab0e45e1a05fdefc01aa9607a0ae1bc6e93c0fff01a6d876c2d00cc49f356\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 05:55:36.515827 containerd[1711]: time="2025-07-07T05:55:36.515770647Z" level=info msg="CreateContainer within sandbox \"cd2ab0e45e1a05fdefc01aa9607a0ae1bc6e93c0fff01a6d876c2d00cc49f356\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"662606054cc9232b6cc18fa689c362aea886ac26835f32f87444efca67f40b8b\""
Jul 7 05:55:36.517312 containerd[1711]: time="2025-07-07T05:55:36.516682289Z" level=info msg="StartContainer for \"662606054cc9232b6cc18fa689c362aea886ac26835f32f87444efca67f40b8b\""
Jul 7 05:55:36.544024 systemd[1]: Started cri-containerd-662606054cc9232b6cc18fa689c362aea886ac26835f32f87444efca67f40b8b.scope - libcontainer container 662606054cc9232b6cc18fa689c362aea886ac26835f32f87444efca67f40b8b.
Jul 7 05:55:36.574501 containerd[1711]: time="2025-07-07T05:55:36.574440138Z" level=info msg="StartContainer for \"662606054cc9232b6cc18fa689c362aea886ac26835f32f87444efca67f40b8b\" returns successfully"
Jul 7 05:55:36.657302 containerd[1711]: time="2025-07-07T05:55:36.656383683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-69kpw,Uid:d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa,Namespace:tigera-operator,Attempt:0,}"
Jul 7 05:55:36.701489 containerd[1711]: time="2025-07-07T05:55:36.701113663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:55:36.701489 containerd[1711]: time="2025-07-07T05:55:36.701184783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:55:36.701489 containerd[1711]: time="2025-07-07T05:55:36.701203783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:55:36.701949 containerd[1711]: time="2025-07-07T05:55:36.701898865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:55:36.724303 systemd[1]: Started cri-containerd-eed6e0333d16f653ed46009df4f3d54e2efc27ddd0ef1fc05433195418d7d96c.scope - libcontainer container eed6e0333d16f653ed46009df4f3d54e2efc27ddd0ef1fc05433195418d7d96c.
Jul 7 05:55:36.754568 containerd[1711]: time="2025-07-07T05:55:36.754519343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-69kpw,Uid:d9ecd009-ad3b-4701-9b62-7c6b69b9cbaa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eed6e0333d16f653ed46009df4f3d54e2efc27ddd0ef1fc05433195418d7d96c\""
Jul 7 05:55:36.758299 containerd[1711]: time="2025-07-07T05:55:36.758247471Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 05:55:38.029855 kubelet[3105]: I0707 05:55:38.029490 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j5qhq" podStartSLOduration=2.02946776 podStartE2EDuration="2.02946776s" podCreationTimestamp="2025-07-07 05:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:55:37.057393932 +0000 UTC m=+8.182441904" watchObservedRunningTime="2025-07-07 05:55:38.02946776 +0000 UTC m=+9.154515732"
Jul 7 05:55:38.200522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364784584.mount: Deactivated successfully.
Jul 7 05:55:38.659149 containerd[1711]: time="2025-07-07T05:55:38.658926271Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:38.661141 containerd[1711]: time="2025-07-07T05:55:38.660891915Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 7 05:55:38.663359 containerd[1711]: time="2025-07-07T05:55:38.663303080Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:38.668124 containerd[1711]: time="2025-07-07T05:55:38.667863171Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:38.668729 containerd[1711]: time="2025-07-07T05:55:38.668398892Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.910107741s"
Jul 7 05:55:38.668729 containerd[1711]: time="2025-07-07T05:55:38.668433692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 7 05:55:38.671374 containerd[1711]: time="2025-07-07T05:55:38.671336778Z" level=info msg="CreateContainer within sandbox \"eed6e0333d16f653ed46009df4f3d54e2efc27ddd0ef1fc05433195418d7d96c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 05:55:38.708563 containerd[1711]: time="2025-07-07T05:55:38.708509300Z" level=info msg="CreateContainer within sandbox \"eed6e0333d16f653ed46009df4f3d54e2efc27ddd0ef1fc05433195418d7d96c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f72dde005279635f82b11e5a76d2f2fe301389a65e15bb12a413e0150eb2e623\""
Jul 7 05:55:38.709295 containerd[1711]: time="2025-07-07T05:55:38.709262222Z" level=info msg="StartContainer for \"f72dde005279635f82b11e5a76d2f2fe301389a65e15bb12a413e0150eb2e623\""
Jul 7 05:55:38.738293 systemd[1]: Started cri-containerd-f72dde005279635f82b11e5a76d2f2fe301389a65e15bb12a413e0150eb2e623.scope - libcontainer container f72dde005279635f82b11e5a76d2f2fe301389a65e15bb12a413e0150eb2e623.
Jul 7 05:55:38.764374 containerd[1711]: time="2025-07-07T05:55:38.764238864Z" level=info msg="StartContainer for \"f72dde005279635f82b11e5a76d2f2fe301389a65e15bb12a413e0150eb2e623\" returns successfully"
Jul 7 05:55:39.079476 kubelet[3105]: I0707 05:55:39.079292 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-69kpw" podStartSLOduration=1.16532509 podStartE2EDuration="3.07927252s" podCreationTimestamp="2025-07-07 05:55:36 +0000 UTC" firstStartedPulling="2025-07-07 05:55:36.756203106 +0000 UTC m=+7.881251038" lastFinishedPulling="2025-07-07 05:55:38.670150496 +0000 UTC m=+9.795198468" observedRunningTime="2025-07-07 05:55:39.062334922 +0000 UTC m=+10.187382894" watchObservedRunningTime="2025-07-07 05:55:39.07927252 +0000 UTC m=+10.204320492"
Jul 7 05:55:44.701218 sudo[2184]: pam_unix(sudo:session): session closed for user root
Jul 7 05:55:44.779897 sshd[2181]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:44.786566 systemd-logind[1662]: Session 9 logged out. Waiting for processes to exit.
Jul 7 05:55:44.790381 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:57240.service: Deactivated successfully.
Jul 7 05:55:44.793806 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 05:55:44.794950 systemd[1]: session-9.scope: Consumed 7.984s CPU time, 146.2M memory peak, 0B memory swap peak.
Jul 7 05:55:44.796735 systemd-logind[1662]: Removed session 9.
Jul 7 05:55:51.383468 systemd[1]: Created slice kubepods-besteffort-pod6eee5644_d796_470b_ae63_786bf6d72804.slice - libcontainer container kubepods-besteffort-pod6eee5644_d796_470b_ae63_786bf6d72804.slice.
Jul 7 05:55:51.423690 kubelet[3105]: I0707 05:55:51.423625 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6eee5644-d796-470b-ae63-786bf6d72804-typha-certs\") pod \"calico-typha-69f979544f-jq2vc\" (UID: \"6eee5644-d796-470b-ae63-786bf6d72804\") " pod="calico-system/calico-typha-69f979544f-jq2vc"
Jul 7 05:55:51.423690 kubelet[3105]: I0707 05:55:51.423676 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgk6\" (UniqueName: \"kubernetes.io/projected/6eee5644-d796-470b-ae63-786bf6d72804-kube-api-access-fhgk6\") pod \"calico-typha-69f979544f-jq2vc\" (UID: \"6eee5644-d796-470b-ae63-786bf6d72804\") " pod="calico-system/calico-typha-69f979544f-jq2vc"
Jul 7 05:55:51.423690 kubelet[3105]: I0707 05:55:51.423701 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eee5644-d796-470b-ae63-786bf6d72804-tigera-ca-bundle\") pod \"calico-typha-69f979544f-jq2vc\" (UID: \"6eee5644-d796-470b-ae63-786bf6d72804\") " pod="calico-system/calico-typha-69f979544f-jq2vc"
Jul 7 05:55:51.564140 systemd[1]: Created slice kubepods-besteffort-pod1260b3b6_01ce_4cbe_9a79_fe1dd20004e1.slice - libcontainer container kubepods-besteffort-pod1260b3b6_01ce_4cbe_9a79_fe1dd20004e1.slice.
Jul 7 05:55:51.625280 kubelet[3105]: I0707 05:55:51.625190 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-cni-net-dir\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625280 kubelet[3105]: I0707 05:55:51.625235 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-cni-bin-dir\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625280 kubelet[3105]: I0707 05:55:51.625255 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-var-lib-calico\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625280 kubelet[3105]: I0707 05:55:51.625274 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq267\" (UniqueName: \"kubernetes.io/projected/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-kube-api-access-pq267\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625280 kubelet[3105]: I0707 05:55:51.625293 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-xtables-lock\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625667 kubelet[3105]: I0707 05:55:51.625309 3105 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-flexvol-driver-host\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625667 kubelet[3105]: I0707 05:55:51.625323 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-lib-modules\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625667 kubelet[3105]: I0707 05:55:51.625337 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-node-certs\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625667 kubelet[3105]: I0707 05:55:51.625352 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-policysync\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625667 kubelet[3105]: I0707 05:55:51.625366 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-var-run-calico\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625810 kubelet[3105]: I0707 05:55:51.625383 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-tigera-ca-bundle\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.625810 kubelet[3105]: I0707 05:55:51.625399 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1260b3b6-01ce-4cbe-9a79-fe1dd20004e1-cni-log-dir\") pod \"calico-node-9trfb\" (UID: \"1260b3b6-01ce-4cbe-9a79-fe1dd20004e1\") " pod="calico-system/calico-node-9trfb" Jul 7 05:55:51.687639 containerd[1711]: time="2025-07-07T05:55:51.687502630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f979544f-jq2vc,Uid:6eee5644-d796-470b-ae63-786bf6d72804,Namespace:calico-system,Attempt:0,}" Jul 7 05:55:51.723669 kubelet[3105]: E0707 05:55:51.723614 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:55:51.728663 kubelet[3105]: E0707 05:55:51.728617 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.728663 kubelet[3105]: W0707 05:55:51.728640 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.729100 kubelet[3105]: E0707 05:55:51.728866 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.733198 kubelet[3105]: E0707 05:55:51.732317 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.733198 kubelet[3105]: W0707 05:55:51.732337 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.733198 kubelet[3105]: E0707 05:55:51.732362 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.733959 kubelet[3105]: E0707 05:55:51.733713 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.733959 kubelet[3105]: W0707 05:55:51.733746 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.733959 kubelet[3105]: E0707 05:55:51.733807 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.734559 kubelet[3105]: E0707 05:55:51.734390 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.734559 kubelet[3105]: W0707 05:55:51.734405 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.734859 kubelet[3105]: E0707 05:55:51.734791 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.734859 kubelet[3105]: W0707 05:55:51.734803 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.734859 kubelet[3105]: E0707 05:55:51.734815 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.735159 kubelet[3105]: E0707 05:55:51.734973 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.735630 kubelet[3105]: E0707 05:55:51.735552 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.735630 kubelet[3105]: W0707 05:55:51.735565 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.735630 kubelet[3105]: E0707 05:55:51.735580 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.736341 kubelet[3105]: E0707 05:55:51.736232 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.736341 kubelet[3105]: W0707 05:55:51.736260 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.736341 kubelet[3105]: E0707 05:55:51.736277 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.737326 kubelet[3105]: E0707 05:55:51.737075 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.737326 kubelet[3105]: W0707 05:55:51.737113 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.737326 kubelet[3105]: E0707 05:55:51.737126 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.740657 kubelet[3105]: E0707 05:55:51.740569 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.740657 kubelet[3105]: W0707 05:55:51.740603 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.740657 kubelet[3105]: E0707 05:55:51.740620 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.760917 containerd[1711]: time="2025-07-07T05:55:51.760624624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:51.760917 containerd[1711]: time="2025-07-07T05:55:51.760686824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:51.760917 containerd[1711]: time="2025-07-07T05:55:51.760702664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:51.761628 containerd[1711]: time="2025-07-07T05:55:51.761350185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:51.785856 kubelet[3105]: E0707 05:55:51.785766 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.786940 kubelet[3105]: W0707 05:55:51.786439 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.786940 kubelet[3105]: E0707 05:55:51.786511 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.790214 systemd[1]: Started cri-containerd-a8b9939607fe12a73a9b0f8cb78f4a6a03bc755fd491c5790a18253b67680591.scope - libcontainer container a8b9939607fe12a73a9b0f8cb78f4a6a03bc755fd491c5790a18253b67680591. Jul 7 05:55:51.820489 kubelet[3105]: E0707 05:55:51.820460 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.820751 kubelet[3105]: W0707 05:55:51.820713 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.820907 kubelet[3105]: E0707 05:55:51.820844 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.821522 kubelet[3105]: E0707 05:55:51.821377 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.821522 kubelet[3105]: W0707 05:55:51.821392 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.821522 kubelet[3105]: E0707 05:55:51.821407 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.822383 kubelet[3105]: E0707 05:55:51.822255 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.822383 kubelet[3105]: W0707 05:55:51.822279 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.822383 kubelet[3105]: E0707 05:55:51.822294 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.823453 kubelet[3105]: E0707 05:55:51.823185 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.823453 kubelet[3105]: W0707 05:55:51.823199 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.823453 kubelet[3105]: E0707 05:55:51.823213 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.823727 kubelet[3105]: E0707 05:55:51.823579 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.823727 kubelet[3105]: W0707 05:55:51.823591 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.823727 kubelet[3105]: E0707 05:55:51.823616 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.824171 kubelet[3105]: E0707 05:55:51.823906 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.824171 kubelet[3105]: W0707 05:55:51.823918 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.824171 kubelet[3105]: E0707 05:55:51.823929 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.824679 kubelet[3105]: E0707 05:55:51.824653 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.824914 kubelet[3105]: W0707 05:55:51.824753 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.824914 kubelet[3105]: E0707 05:55:51.824772 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.825706 kubelet[3105]: E0707 05:55:51.825370 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.825706 kubelet[3105]: W0707 05:55:51.825384 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.825706 kubelet[3105]: E0707 05:55:51.825401 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.826636 kubelet[3105]: E0707 05:55:51.826510 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.826636 kubelet[3105]: W0707 05:55:51.826527 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.826636 kubelet[3105]: E0707 05:55:51.826539 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.826963 kubelet[3105]: E0707 05:55:51.826845 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.827161 kubelet[3105]: W0707 05:55:51.827045 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.827161 kubelet[3105]: E0707 05:55:51.827063 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.827768 kubelet[3105]: E0707 05:55:51.827725 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.828020 kubelet[3105]: W0707 05:55:51.827923 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.828020 kubelet[3105]: E0707 05:55:51.827945 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.829012 kubelet[3105]: E0707 05:55:51.828847 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.829012 kubelet[3105]: W0707 05:55:51.828871 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.829012 kubelet[3105]: E0707 05:55:51.828885 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.829686 kubelet[3105]: E0707 05:55:51.829508 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.829686 kubelet[3105]: W0707 05:55:51.829594 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.829686 kubelet[3105]: E0707 05:55:51.829609 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.830507 kubelet[3105]: E0707 05:55:51.830456 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.830507 kubelet[3105]: W0707 05:55:51.830470 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.830507 kubelet[3105]: E0707 05:55:51.830483 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.831245 kubelet[3105]: E0707 05:55:51.830898 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.831245 kubelet[3105]: W0707 05:55:51.830911 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.831245 kubelet[3105]: E0707 05:55:51.830924 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.831696 kubelet[3105]: E0707 05:55:51.831524 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.831696 kubelet[3105]: W0707 05:55:51.831539 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.831696 kubelet[3105]: E0707 05:55:51.831551 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.832298 kubelet[3105]: E0707 05:55:51.832274 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.832298 kubelet[3105]: W0707 05:55:51.832292 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.832657 kubelet[3105]: E0707 05:55:51.832316 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.832657 kubelet[3105]: I0707 05:55:51.832345 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e285d422-7e9a-40bd-bc64-700d7b15f4ab-socket-dir\") pod \"csi-node-driver-2zxks\" (UID: \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\") " pod="calico-system/csi-node-driver-2zxks" Jul 7 05:55:51.832657 kubelet[3105]: E0707 05:55:51.832524 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.832657 kubelet[3105]: W0707 05:55:51.832535 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.832657 kubelet[3105]: E0707 05:55:51.832548 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.832657 kubelet[3105]: I0707 05:55:51.832563 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e285d422-7e9a-40bd-bc64-700d7b15f4ab-registration-dir\") pod \"csi-node-driver-2zxks\" (UID: \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\") " pod="calico-system/csi-node-driver-2zxks" Jul 7 05:55:51.833007 kubelet[3105]: E0707 05:55:51.832869 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.833007 kubelet[3105]: W0707 05:55:51.832882 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.833007 kubelet[3105]: E0707 05:55:51.832902 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.833007 kubelet[3105]: I0707 05:55:51.832921 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e285d422-7e9a-40bd-bc64-700d7b15f4ab-kubelet-dir\") pod \"csi-node-driver-2zxks\" (UID: \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\") " pod="calico-system/csi-node-driver-2zxks" Jul 7 05:55:51.833538 kubelet[3105]: E0707 05:55:51.833321 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.833538 kubelet[3105]: W0707 05:55:51.833346 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.833538 kubelet[3105]: E0707 05:55:51.833360 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.833538 kubelet[3105]: I0707 05:55:51.833379 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e285d422-7e9a-40bd-bc64-700d7b15f4ab-varrun\") pod \"csi-node-driver-2zxks\" (UID: \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\") " pod="calico-system/csi-node-driver-2zxks" Jul 7 05:55:51.835065 kubelet[3105]: E0707 05:55:51.834753 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.835065 kubelet[3105]: W0707 05:55:51.834807 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.835065 kubelet[3105]: E0707 05:55:51.834838 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.835895 kubelet[3105]: E0707 05:55:51.835879 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.836058 kubelet[3105]: W0707 05:55:51.835975 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.836058 kubelet[3105]: E0707 05:55:51.836017 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.836489 kubelet[3105]: E0707 05:55:51.836420 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.836489 kubelet[3105]: W0707 05:55:51.836438 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.836489 kubelet[3105]: E0707 05:55:51.836479 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.837071 kubelet[3105]: E0707 05:55:51.836900 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.837071 kubelet[3105]: W0707 05:55:51.836918 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.837071 kubelet[3105]: E0707 05:55:51.836951 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.837670 kubelet[3105]: E0707 05:55:51.837594 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.838206 kubelet[3105]: W0707 05:55:51.837832 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.838206 kubelet[3105]: E0707 05:55:51.837908 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.839300 kubelet[3105]: E0707 05:55:51.839276 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.839474 kubelet[3105]: W0707 05:55:51.839390 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.839474 kubelet[3105]: E0707 05:55:51.839446 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.839860 kubelet[3105]: E0707 05:55:51.839738 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.839860 kubelet[3105]: W0707 05:55:51.839755 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.839860 kubelet[3105]: E0707 05:55:51.839785 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.840292 kubelet[3105]: E0707 05:55:51.840255 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.840292 kubelet[3105]: W0707 05:55:51.840274 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.840546 kubelet[3105]: E0707 05:55:51.840414 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.840771 kubelet[3105]: E0707 05:55:51.840664 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.840940 kubelet[3105]: W0707 05:55:51.840677 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.841418 kubelet[3105]: E0707 05:55:51.841210 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.841418 kubelet[3105]: E0707 05:55:51.841294 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.841418 kubelet[3105]: W0707 05:55:51.841303 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.841672 kubelet[3105]: E0707 05:55:51.841323 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.841724 kubelet[3105]: E0707 05:55:51.841676 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.841724 kubelet[3105]: W0707 05:55:51.841691 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.841724 kubelet[3105]: E0707 05:55:51.841711 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.842093 kubelet[3105]: E0707 05:55:51.841995 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.842093 kubelet[3105]: W0707 05:55:51.842014 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.842093 kubelet[3105]: E0707 05:55:51.842028 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.871180 containerd[1711]: time="2025-07-07T05:55:51.869162772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9trfb,Uid:1260b3b6-01ce-4cbe-9a79-fe1dd20004e1,Namespace:calico-system,Attempt:0,}" Jul 7 05:55:51.884923 containerd[1711]: time="2025-07-07T05:55:51.884866886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f979544f-jq2vc,Uid:6eee5644-d796-470b-ae63-786bf6d72804,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8b9939607fe12a73a9b0f8cb78f4a6a03bc755fd491c5790a18253b67680591\"" Jul 7 05:55:51.886927 containerd[1711]: time="2025-07-07T05:55:51.886887490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 05:55:51.924141 containerd[1711]: time="2025-07-07T05:55:51.921484563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:51.924141 containerd[1711]: time="2025-07-07T05:55:51.922064524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:51.924141 containerd[1711]: time="2025-07-07T05:55:51.922161444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:51.924141 containerd[1711]: time="2025-07-07T05:55:51.922314124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:51.935817 kubelet[3105]: E0707 05:55:51.935778 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.936029 kubelet[3105]: W0707 05:55:51.936012 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.936177 kubelet[3105]: E0707 05:55:51.936161 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.937042 kubelet[3105]: E0707 05:55:51.936724 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.937214 kubelet[3105]: W0707 05:55:51.937192 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.937346 kubelet[3105]: E0707 05:55:51.937305 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.938162 kubelet[3105]: E0707 05:55:51.938006 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.938321 kubelet[3105]: W0707 05:55:51.938274 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.939062 kubelet[3105]: E0707 05:55:51.938923 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.940044 kubelet[3105]: E0707 05:55:51.939976 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.940576 kubelet[3105]: W0707 05:55:51.940556 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.940814 kubelet[3105]: E0707 05:55:51.940749 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.942276 kubelet[3105]: E0707 05:55:51.942169 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.942658 kubelet[3105]: W0707 05:55:51.942637 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.942949 kubelet[3105]: E0707 05:55:51.942892 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.944878 kubelet[3105]: E0707 05:55:51.944848 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.945119 kubelet[3105]: W0707 05:55:51.944863 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.945119 kubelet[3105]: E0707 05:55:51.945028 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.945660 kubelet[3105]: E0707 05:55:51.945554 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.945660 kubelet[3105]: W0707 05:55:51.945581 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.945660 kubelet[3105]: E0707 05:55:51.945627 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.948172 kubelet[3105]: E0707 05:55:51.946490 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.948172 kubelet[3105]: W0707 05:55:51.946508 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.948172 kubelet[3105]: E0707 05:55:51.947762 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.948549 kubelet[3105]: E0707 05:55:51.948534 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.948851 kubelet[3105]: W0707 05:55:51.948671 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.948851 kubelet[3105]: E0707 05:55:51.948717 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.951983 kubelet[3105]: E0707 05:55:51.951279 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.951983 kubelet[3105]: W0707 05:55:51.951299 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.951983 kubelet[3105]: E0707 05:55:51.951669 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.954331 systemd[1]: Started cri-containerd-984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867.scope - libcontainer container 984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867. 
Jul 7 05:55:51.957517 kubelet[3105]: E0707 05:55:51.955355 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.957517 kubelet[3105]: W0707 05:55:51.955379 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.957517 kubelet[3105]: E0707 05:55:51.955428 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.958755 kubelet[3105]: E0707 05:55:51.958643 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.958755 kubelet[3105]: W0707 05:55:51.958662 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.958755 kubelet[3105]: E0707 05:55:51.958697 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.958755 kubelet[3105]: I0707 05:55:51.958739 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmk9\" (UniqueName: \"kubernetes.io/projected/e285d422-7e9a-40bd-bc64-700d7b15f4ab-kube-api-access-4nmk9\") pod \"csi-node-driver-2zxks\" (UID: \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\") " pod="calico-system/csi-node-driver-2zxks" Jul 7 05:55:51.959445 kubelet[3105]: E0707 05:55:51.959421 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.959602 kubelet[3105]: W0707 05:55:51.959588 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.959708 kubelet[3105]: E0707 05:55:51.959680 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.960654 kubelet[3105]: E0707 05:55:51.960478 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.961172 kubelet[3105]: W0707 05:55:51.961016 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.961172 kubelet[3105]: E0707 05:55:51.961100 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.962481 kubelet[3105]: E0707 05:55:51.962298 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.962481 kubelet[3105]: W0707 05:55:51.962318 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.962481 kubelet[3105]: E0707 05:55:51.962411 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.963562 kubelet[3105]: E0707 05:55:51.963064 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.963562 kubelet[3105]: W0707 05:55:51.963115 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.963562 kubelet[3105]: E0707 05:55:51.963153 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.964102 kubelet[3105]: E0707 05:55:51.964011 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.964102 kubelet[3105]: W0707 05:55:51.964030 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.964690 kubelet[3105]: E0707 05:55:51.964442 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.964690 kubelet[3105]: E0707 05:55:51.964661 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.964690 kubelet[3105]: W0707 05:55:51.964673 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.965444 kubelet[3105]: E0707 05:55:51.965296 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.965762 kubelet[3105]: E0707 05:55:51.965747 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.966331 kubelet[3105]: W0707 05:55:51.965836 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.966331 kubelet[3105]: E0707 05:55:51.965916 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.967210 kubelet[3105]: E0707 05:55:51.967027 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.967210 kubelet[3105]: W0707 05:55:51.967043 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.967687 kubelet[3105]: E0707 05:55:51.967081 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.968452 kubelet[3105]: E0707 05:55:51.968213 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.968452 kubelet[3105]: W0707 05:55:51.968230 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.968452 kubelet[3105]: E0707 05:55:51.968279 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.969685 kubelet[3105]: E0707 05:55:51.969546 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.969685 kubelet[3105]: W0707 05:55:51.969564 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.969685 kubelet[3105]: E0707 05:55:51.969599 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:51.970105 kubelet[3105]: E0707 05:55:51.970033 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:51.970105 kubelet[3105]: W0707 05:55:51.970047 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:51.970105 kubelet[3105]: E0707 05:55:51.970058 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:51.995110 containerd[1711]: time="2025-07-07T05:55:51.995041438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9trfb,Uid:1260b3b6-01ce-4cbe-9a79-fe1dd20004e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\"" Jul 7 05:55:52.061897 kubelet[3105]: E0707 05:55:52.061644 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.061897 kubelet[3105]: W0707 05:55:52.061670 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.061897 kubelet[3105]: E0707 05:55:52.061705 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:52.062707 kubelet[3105]: E0707 05:55:52.062594 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.062707 kubelet[3105]: W0707 05:55:52.062610 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.062707 kubelet[3105]: E0707 05:55:52.062627 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:52.064584 kubelet[3105]: E0707 05:55:52.063690 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.064584 kubelet[3105]: W0707 05:55:52.063708 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.064584 kubelet[3105]: E0707 05:55:52.063725 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:52.065223 kubelet[3105]: E0707 05:55:52.065197 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.065223 kubelet[3105]: W0707 05:55:52.065218 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.065306 kubelet[3105]: E0707 05:55:52.065234 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:52.065557 kubelet[3105]: E0707 05:55:52.065537 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.065637 kubelet[3105]: W0707 05:55:52.065554 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.065637 kubelet[3105]: E0707 05:55:52.065576 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:52.080898 kubelet[3105]: E0707 05:55:52.080836 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:52.080898 kubelet[3105]: W0707 05:55:52.080882 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:52.080898 kubelet[3105]: E0707 05:55:52.080906 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:52.995964 kubelet[3105]: E0707 05:55:52.995916 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:55:53.324081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790230444.mount: Deactivated successfully. 
Jul 7 05:55:53.832489 containerd[1711]: time="2025-07-07T05:55:53.832423936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:53.834823 containerd[1711]: time="2025-07-07T05:55:53.834773941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 05:55:53.838419 containerd[1711]: time="2025-07-07T05:55:53.838377789Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:53.842723 containerd[1711]: time="2025-07-07T05:55:53.842081198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:53.842723 containerd[1711]: time="2025-07-07T05:55:53.842595679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.955664789s" Jul 7 05:55:53.842723 containerd[1711]: time="2025-07-07T05:55:53.842624159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 05:55:53.845767 containerd[1711]: time="2025-07-07T05:55:53.845731246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 05:55:53.862814 containerd[1711]: time="2025-07-07T05:55:53.862694564Z" level=info msg="CreateContainer within sandbox \"a8b9939607fe12a73a9b0f8cb78f4a6a03bc755fd491c5790a18253b67680591\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 05:55:53.894751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138467526.mount: Deactivated successfully. Jul 7 05:55:53.901811 containerd[1711]: time="2025-07-07T05:55:53.901758372Z" level=info msg="CreateContainer within sandbox \"a8b9939607fe12a73a9b0f8cb78f4a6a03bc755fd491c5790a18253b67680591\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"209b7bbc8c758a2029167a3d2f73b3880723f822e7e868ac70555b9d3127f464\"" Jul 7 05:55:53.902383 containerd[1711]: time="2025-07-07T05:55:53.902353613Z" level=info msg="StartContainer for \"209b7bbc8c758a2029167a3d2f73b3880723f822e7e868ac70555b9d3127f464\"" Jul 7 05:55:53.938216 systemd[1]: Started cri-containerd-209b7bbc8c758a2029167a3d2f73b3880723f822e7e868ac70555b9d3127f464.scope - libcontainer container 209b7bbc8c758a2029167a3d2f73b3880723f822e7e868ac70555b9d3127f464. Jul 7 05:55:53.982756 containerd[1711]: time="2025-07-07T05:55:53.982639873Z" level=info msg="StartContainer for \"209b7bbc8c758a2029167a3d2f73b3880723f822e7e868ac70555b9d3127f464\" returns successfully" Jul 7 05:55:54.158357 kubelet[3105]: E0707 05:55:54.158201 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:54.158357 kubelet[3105]: W0707 05:55:54.158227 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:54.158357 kubelet[3105]: E0707 05:55:54.158249 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 05:55:54.161116 kubelet[3105]: E0707 05:55:54.160616 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:55:54.161116 kubelet[3105]: W0707 05:55:54.160649 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:55:54.161116 kubelet[3105]: E0707 05:55:54.160669 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:55:54.186358 kubelet[3105]: E0707 05:55:54.186180 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 05:55:54.996051 kubelet[3105]: E0707 05:55:54.995520 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab"
Jul 7 05:55:55.088455 kubelet[3105]: I0707 05:55:55.088421 3105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 05:55:55.125223 containerd[1711]: time="2025-07-07T05:55:55.125167916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:55.127034 containerd[1711]: time="2025-07-07T05:55:55.126875760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Jul 7 05:55:55.131380 containerd[1711]: time="2025-07-07T05:55:55.131301970Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:55.137912 containerd[1711]: time="2025-07-07T05:55:55.137869064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:55:55.138775 containerd[1711]: time="2025-07-07T05:55:55.138651226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.29275722s"
Jul 7 05:55:55.138775 containerd[1711]: time="2025-07-07T05:55:55.138688306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 7 05:55:55.142907 containerd[1711]: time="2025-07-07T05:55:55.142756515Z" level=info msg="CreateContainer within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 7 05:55:55.174470 kubelet[3105]: E0707 05:55:55.174412 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:55:55.174470 kubelet[3105]: W0707 05:55:55.174436 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:55:55.174470 kubelet[3105]: E0707 05:55:55.174458 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 05:55:55.175052 kubelet[3105]: E0707 05:55:55.174629 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 05:55:55.175052 kubelet[3105]: W0707 05:55:55.174638 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 05:55:55.175052 kubelet[3105]: E0707 05:55:55.174648 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 05:55:55.181370 containerd[1711]: time="2025-07-07T05:55:55.181270122Z" level=info msg="CreateContainer within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce\""
Jul 7 05:55:55.183051 containerd[1711]: time="2025-07-07T05:55:55.182517645Z" level=info msg="StartContainer for \"b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce\""
Jul 7 05:55:55.190655 kubelet[3105]: E0707 05:55:55.190499 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.191276 kubelet[3105]: E0707 05:55:55.191197 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.191276 kubelet[3105]: W0707 05:55:55.191226 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.191489 kubelet[3105]: E0707 05:55:55.191387 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.191823 kubelet[3105]: E0707 05:55:55.191721 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.191823 kubelet[3105]: W0707 05:55:55.191746 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.192832 kubelet[3105]: E0707 05:55:55.192588 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.194372 kubelet[3105]: E0707 05:55:55.193522 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.194372 kubelet[3105]: W0707 05:55:55.193541 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.194372 kubelet[3105]: E0707 05:55:55.193677 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.195368 kubelet[3105]: E0707 05:55:55.195333 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.195368 kubelet[3105]: W0707 05:55:55.195363 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.196229 kubelet[3105]: E0707 05:55:55.196191 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.196455 kubelet[3105]: E0707 05:55:55.196293 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.200238 kubelet[3105]: W0707 05:55:55.196306 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.200705 kubelet[3105]: E0707 05:55:55.200490 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.200929 kubelet[3105]: E0707 05:55:55.200830 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.200929 kubelet[3105]: W0707 05:55:55.200844 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.200929 kubelet[3105]: E0707 05:55:55.200884 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.201305 kubelet[3105]: E0707 05:55:55.201222 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.201305 kubelet[3105]: W0707 05:55:55.201260 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.201305 kubelet[3105]: E0707 05:55:55.201288 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.201755 kubelet[3105]: E0707 05:55:55.201617 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.201755 kubelet[3105]: W0707 05:55:55.201637 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.201755 kubelet[3105]: E0707 05:55:55.201673 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.203135 kubelet[3105]: E0707 05:55:55.202166 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.203135 kubelet[3105]: W0707 05:55:55.202343 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.203135 kubelet[3105]: E0707 05:55:55.202521 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.203584 kubelet[3105]: E0707 05:55:55.203555 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.203584 kubelet[3105]: W0707 05:55:55.203578 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.203668 kubelet[3105]: E0707 05:55:55.203602 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.203896 kubelet[3105]: E0707 05:55:55.203871 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.203896 kubelet[3105]: W0707 05:55:55.203895 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.204413 kubelet[3105]: E0707 05:55:55.203959 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.205036 kubelet[3105]: E0707 05:55:55.205003 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.205036 kubelet[3105]: W0707 05:55:55.205026 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.205156 kubelet[3105]: E0707 05:55:55.205049 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:55:55.205999 kubelet[3105]: E0707 05:55:55.205975 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:55:55.205999 kubelet[3105]: W0707 05:55:55.205996 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:55:55.206132 kubelet[3105]: E0707 05:55:55.206010 3105 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:55:55.221326 systemd[1]: Started cri-containerd-b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce.scope - libcontainer container b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce. Jul 7 05:55:55.260360 containerd[1711]: time="2025-07-07T05:55:55.260243939Z" level=info msg="StartContainer for \"b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce\" returns successfully" Jul 7 05:55:55.268911 systemd[1]: cri-containerd-b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce.scope: Deactivated successfully. Jul 7 05:55:55.295651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce-rootfs.mount: Deactivated successfully. 
Jul 7 05:55:56.111492 kubelet[3105]: I0707 05:55:56.111166 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69f979544f-jq2vc" podStartSLOduration=3.154042336 podStartE2EDuration="5.111147088s" podCreationTimestamp="2025-07-07 05:55:51 +0000 UTC" firstStartedPulling="2025-07-07 05:55:51.886499369 +0000 UTC m=+23.011547341" lastFinishedPulling="2025-07-07 05:55:53.843604081 +0000 UTC m=+24.968652093" observedRunningTime="2025-07-07 05:55:54.104504266 +0000 UTC m=+25.229552238" watchObservedRunningTime="2025-07-07 05:55:56.111147088 +0000 UTC m=+27.236195060" Jul 7 05:55:56.347711 containerd[1711]: time="2025-07-07T05:55:56.347455098Z" level=info msg="shim disconnected" id=b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce namespace=k8s.io Jul 7 05:55:56.347711 containerd[1711]: time="2025-07-07T05:55:56.347581498Z" level=warning msg="cleaning up after shim disconnected" id=b9b43fa3db62aca74ccb413c46d395648b932f01238b53723f4438f88ac0edce namespace=k8s.io Jul 7 05:55:56.347711 containerd[1711]: time="2025-07-07T05:55:56.347593178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:56.996745 kubelet[3105]: E0707 05:55:56.996294 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:55:57.100209 containerd[1711]: time="2025-07-07T05:55:57.099727545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 05:55:58.999240 kubelet[3105]: E0707 05:55:58.997575 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:55:59.460320 containerd[1711]: time="2025-07-07T05:55:59.460265560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:59.462401 containerd[1711]: time="2025-07-07T05:55:59.462246485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 05:55:59.464449 containerd[1711]: time="2025-07-07T05:55:59.464395970Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:59.468732 containerd[1711]: time="2025-07-07T05:55:59.468677619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:59.469586 containerd[1711]: time="2025-07-07T05:55:59.469465621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.369695316s" Jul 7 05:55:59.469586 containerd[1711]: time="2025-07-07T05:55:59.469501221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 05:55:59.473673 containerd[1711]: time="2025-07-07T05:55:59.473547990Z" level=info msg="CreateContainer within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 05:55:59.510435 
containerd[1711]: time="2025-07-07T05:55:59.510309513Z" level=info msg="CreateContainer within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de\"" Jul 7 05:55:59.511748 containerd[1711]: time="2025-07-07T05:55:59.511713116Z" level=info msg="StartContainer for \"81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de\"" Jul 7 05:55:59.544426 systemd[1]: Started cri-containerd-81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de.scope - libcontainer container 81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de. Jul 7 05:55:59.586229 containerd[1711]: time="2025-07-07T05:55:59.585869002Z" level=info msg="StartContainer for \"81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de\" returns successfully" Jul 7 05:56:00.897703 containerd[1711]: time="2025-07-07T05:56:00.897628376Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:56:00.900512 systemd[1]: cri-containerd-81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de.scope: Deactivated successfully. Jul 7 05:56:00.924838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de-rootfs.mount: Deactivated successfully. 
Jul 7 05:56:00.940226 kubelet[3105]: I0707 05:56:00.939184 3105 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:56:01.271958 kubelet[3105]: I0707 05:56:01.031935 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-backend-key-pair\") pod \"whisker-7fbcfc8df4-stb7m\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " pod="calico-system/whisker-7fbcfc8df4-stb7m" Jul 7 05:56:01.271958 kubelet[3105]: I0707 05:56:01.031996 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-ca-bundle\") pod \"whisker-7fbcfc8df4-stb7m\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " pod="calico-system/whisker-7fbcfc8df4-stb7m" Jul 7 05:56:01.271958 kubelet[3105]: I0707 05:56:01.032020 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9843f4c8-0ee5-49b8-b775-11d693445b56-tigera-ca-bundle\") pod \"calico-kube-controllers-69dff4fc68-6rlws\" (UID: \"9843f4c8-0ee5-49b8-b775-11d693445b56\") " pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" Jul 7 05:56:01.271958 kubelet[3105]: I0707 05:56:01.032038 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd4t\" (UniqueName: \"kubernetes.io/projected/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-kube-api-access-hhd4t\") pod \"whisker-7fbcfc8df4-stb7m\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " pod="calico-system/whisker-7fbcfc8df4-stb7m" Jul 7 05:56:01.271958 kubelet[3105]: I0707 05:56:01.032067 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89266\" (UniqueName: \"kubernetes.io/projected/654bded6-4470-4285-bbbd-8ec39f31a268-kube-api-access-89266\") pod \"calico-apiserver-8444f76b56-bhmcq\" (UID: \"654bded6-4470-4285-bbbd-8ec39f31a268\") " pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" Jul 7 05:56:00.996022 systemd[1]: Created slice kubepods-besteffort-pod9843f4c8_0ee5_49b8_b775_11d693445b56.slice - libcontainer container kubepods-besteffort-pod9843f4c8_0ee5_49b8_b775_11d693445b56.slice. Jul 7 05:56:01.272265 kubelet[3105]: I0707 05:56:01.032526 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/654bded6-4470-4285-bbbd-8ec39f31a268-calico-apiserver-certs\") pod \"calico-apiserver-8444f76b56-bhmcq\" (UID: \"654bded6-4470-4285-bbbd-8ec39f31a268\") " pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" Jul 7 05:56:01.272265 kubelet[3105]: I0707 05:56:01.032552 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblm9\" (UniqueName: \"kubernetes.io/projected/9843f4c8-0ee5-49b8-b775-11d693445b56-kube-api-access-wblm9\") pod \"calico-kube-controllers-69dff4fc68-6rlws\" (UID: \"9843f4c8-0ee5-49b8-b775-11d693445b56\") " pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" Jul 7 05:56:01.013664 systemd[1]: Created slice kubepods-besteffort-pod381db6a2_37d3_4dd2_8605_5ee49bf45bbc.slice - libcontainer container kubepods-besteffort-pod381db6a2_37d3_4dd2_8605_5ee49bf45bbc.slice. Jul 7 05:56:01.024410 systemd[1]: Created slice kubepods-besteffort-pod654bded6_4470_4285_bbbd_8ec39f31a268.slice - libcontainer container kubepods-besteffort-pod654bded6_4470_4285_bbbd_8ec39f31a268.slice. Jul 7 05:56:01.037536 systemd[1]: Created slice kubepods-besteffort-pode285d422_7e9a_40bd_bc64_700d7b15f4ab.slice - libcontainer container kubepods-besteffort-pode285d422_7e9a_40bd_bc64_700d7b15f4ab.slice. 
Jul 7 05:56:01.274062 containerd[1711]: time="2025-07-07T05:56:01.273607554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2zxks,Uid:e285d422-7e9a-40bd-bc64-700d7b15f4ab,Namespace:calico-system,Attempt:0,}" Jul 7 05:56:01.317786 systemd[1]: Created slice kubepods-burstable-pod93d92675_dc75_493f_afb1_b5a1b922550a.slice - libcontainer container kubepods-burstable-pod93d92675_dc75_493f_afb1_b5a1b922550a.slice. Jul 7 05:56:01.327299 systemd[1]: Created slice kubepods-besteffort-podd4471ce3_99ef_4485_9426_4f1c145f77dd.slice - libcontainer container kubepods-besteffort-podd4471ce3_99ef_4485_9426_4f1c145f77dd.slice. Jul 7 05:56:01.335509 kubelet[3105]: I0707 05:56:01.333636 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4471ce3-99ef-4485-9426-4f1c145f77dd-calico-apiserver-certs\") pod \"calico-apiserver-8444f76b56-r26qp\" (UID: \"d4471ce3-99ef-4485-9426-4f1c145f77dd\") " pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" Jul 7 05:56:01.335509 kubelet[3105]: I0707 05:56:01.333711 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7pm\" (UniqueName: \"kubernetes.io/projected/d4471ce3-99ef-4485-9426-4f1c145f77dd-kube-api-access-gx7pm\") pod \"calico-apiserver-8444f76b56-r26qp\" (UID: \"d4471ce3-99ef-4485-9426-4f1c145f77dd\") " pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" Jul 7 05:56:01.335509 kubelet[3105]: I0707 05:56:01.333732 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93d92675-dc75-493f-afb1-b5a1b922550a-config-volume\") pod \"coredns-7c65d6cfc9-7c8tc\" (UID: \"93d92675-dc75-493f-afb1-b5a1b922550a\") " pod="kube-system/coredns-7c65d6cfc9-7c8tc" Jul 7 05:56:01.335509 kubelet[3105]: I0707 05:56:01.333748 3105 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f156b26-7db3-4daa-b70b-8db09e58fe84-config\") pod \"goldmane-58fd7646b9-pvv7q\" (UID: \"6f156b26-7db3-4daa-b70b-8db09e58fe84\") " pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:01.335509 kubelet[3105]: I0707 05:56:01.333767 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5bg\" (UniqueName: \"kubernetes.io/projected/b7f453a6-8802-4475-abf7-3bbe1e10231e-kube-api-access-tg5bg\") pod \"coredns-7c65d6cfc9-6jpc6\" (UID: \"b7f453a6-8802-4475-abf7-3bbe1e10231e\") " pod="kube-system/coredns-7c65d6cfc9-6jpc6" Jul 7 05:56:01.335284 systemd[1]: Created slice kubepods-burstable-podb7f453a6_8802_4475_abf7_3bbe1e10231e.slice - libcontainer container kubepods-burstable-podb7f453a6_8802_4475_abf7_3bbe1e10231e.slice. Jul 7 05:56:01.335813 kubelet[3105]: I0707 05:56:01.333835 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f453a6-8802-4475-abf7-3bbe1e10231e-config-volume\") pod \"coredns-7c65d6cfc9-6jpc6\" (UID: \"b7f453a6-8802-4475-abf7-3bbe1e10231e\") " pod="kube-system/coredns-7c65d6cfc9-6jpc6" Jul 7 05:56:01.335813 kubelet[3105]: I0707 05:56:01.333882 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f156b26-7db3-4daa-b70b-8db09e58fe84-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-pvv7q\" (UID: \"6f156b26-7db3-4daa-b70b-8db09e58fe84\") " pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:01.335813 kubelet[3105]: I0707 05:56:01.333945 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/6f156b26-7db3-4daa-b70b-8db09e58fe84-goldmane-key-pair\") pod \"goldmane-58fd7646b9-pvv7q\" (UID: \"6f156b26-7db3-4daa-b70b-8db09e58fe84\") " pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:01.335813 kubelet[3105]: I0707 05:56:01.333978 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2vk\" (UniqueName: \"kubernetes.io/projected/93d92675-dc75-493f-afb1-b5a1b922550a-kube-api-access-rm2vk\") pod \"coredns-7c65d6cfc9-7c8tc\" (UID: \"93d92675-dc75-493f-afb1-b5a1b922550a\") " pod="kube-system/coredns-7c65d6cfc9-7c8tc" Jul 7 05:56:01.335813 kubelet[3105]: I0707 05:56:01.333997 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqq2l\" (UniqueName: \"kubernetes.io/projected/6f156b26-7db3-4daa-b70b-8db09e58fe84-kube-api-access-tqq2l\") pod \"goldmane-58fd7646b9-pvv7q\" (UID: \"6f156b26-7db3-4daa-b70b-8db09e58fe84\") " pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:01.344306 systemd[1]: Created slice kubepods-besteffort-pod6f156b26_7db3_4daa_b70b_8db09e58fe84.slice - libcontainer container kubepods-besteffort-pod6f156b26_7db3_4daa_b70b_8db09e58fe84.slice. 
Jul 7 05:56:01.577002 containerd[1711]: time="2025-07-07T05:56:01.576137292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fbcfc8df4-stb7m,Uid:381db6a2-37d3-4dd2-8605-5ee49bf45bbc,Namespace:calico-system,Attempt:0,}" Jul 7 05:56:01.577002 containerd[1711]: time="2025-07-07T05:56:01.576179892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dff4fc68-6rlws,Uid:9843f4c8-0ee5-49b8-b775-11d693445b56,Namespace:calico-system,Attempt:0,}" Jul 7 05:56:01.595170 containerd[1711]: time="2025-07-07T05:56:01.595120213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-bhmcq,Uid:654bded6-4470-4285-bbbd-8ec39f31a268,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:56:01.625341 containerd[1711]: time="2025-07-07T05:56:01.625294519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7c8tc,Uid:93d92675-dc75-493f-afb1-b5a1b922550a,Namespace:kube-system,Attempt:0,}" Jul 7 05:56:01.631147 containerd[1711]: time="2025-07-07T05:56:01.631079011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-r26qp,Uid:d4471ce3-99ef-4485-9426-4f1c145f77dd,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:56:01.642126 containerd[1711]: time="2025-07-07T05:56:01.642043795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6jpc6,Uid:b7f453a6-8802-4475-abf7-3bbe1e10231e,Namespace:kube-system,Attempt:0,}" Jul 7 05:56:01.647226 containerd[1711]: time="2025-07-07T05:56:01.647151886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvv7q,Uid:6f156b26-7db3-4daa-b70b-8db09e58fe84,Namespace:calico-system,Attempt:0,}" Jul 7 05:56:01.828186 containerd[1711]: time="2025-07-07T05:56:01.827999000Z" level=info msg="shim disconnected" id=81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de namespace=k8s.io Jul 7 05:56:01.828186 containerd[1711]: time="2025-07-07T05:56:01.828065200Z" 
level=warning msg="cleaning up after shim disconnected" id=81e8de0db9af390f1242d30e8f80b11690ee3b267df6ba875140f75583bb59de namespace=k8s.io Jul 7 05:56:01.828186 containerd[1711]: time="2025-07-07T05:56:01.828074680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:56:02.137005 containerd[1711]: time="2025-07-07T05:56:02.136314231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 05:56:02.198935 containerd[1711]: time="2025-07-07T05:56:02.198884567Z" level=error msg="Failed to destroy network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.205373 containerd[1711]: time="2025-07-07T05:56:02.205305941Z" level=error msg="encountered an error cleaning up failed sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.206360 containerd[1711]: time="2025-07-07T05:56:02.206199983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-bhmcq,Uid:654bded6-4470-4285-bbbd-8ec39f31a268,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.206544 kubelet[3105]: E0707 05:56:02.206477 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.207358 kubelet[3105]: E0707 05:56:02.206545 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" Jul 7 05:56:02.207358 kubelet[3105]: E0707 05:56:02.206564 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" Jul 7 05:56:02.207358 kubelet[3105]: E0707 05:56:02.206611 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8444f76b56-bhmcq_calico-apiserver(654bded6-4470-4285-bbbd-8ec39f31a268)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8444f76b56-bhmcq_calico-apiserver(654bded6-4470-4285-bbbd-8ec39f31a268)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" podUID="654bded6-4470-4285-bbbd-8ec39f31a268" Jul 7 05:56:02.268545 containerd[1711]: time="2025-07-07T05:56:02.268310158Z" level=error msg="Failed to destroy network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.270119 containerd[1711]: time="2025-07-07T05:56:02.270017922Z" level=error msg="encountered an error cleaning up failed sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.270315 containerd[1711]: time="2025-07-07T05:56:02.270121682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2zxks,Uid:e285d422-7e9a-40bd-bc64-700d7b15f4ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.270616 kubelet[3105]: E0707 05:56:02.270484 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.270616 kubelet[3105]: E0707 05:56:02.270554 3105 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2zxks" Jul 7 05:56:02.270616 kubelet[3105]: E0707 05:56:02.270573 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2zxks" Jul 7 05:56:02.270739 kubelet[3105]: E0707 05:56:02.270618 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2zxks_calico-system(e285d422-7e9a-40bd-bc64-700d7b15f4ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2zxks_calico-system(e285d422-7e9a-40bd-bc64-700d7b15f4ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:56:02.288879 containerd[1711]: time="2025-07-07T05:56:02.288820442Z" level=error msg="Failed to destroy network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.289542 containerd[1711]: time="2025-07-07T05:56:02.289388524Z" level=error msg="encountered an error cleaning up failed sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.289542 containerd[1711]: time="2025-07-07T05:56:02.289441324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6jpc6,Uid:b7f453a6-8802-4475-abf7-3bbe1e10231e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.290960 kubelet[3105]: E0707 05:56:02.290220 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.290960 kubelet[3105]: E0707 05:56:02.290575 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-6jpc6" Jul 7 05:56:02.290960 kubelet[3105]: E0707 05:56:02.290606 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6jpc6" Jul 7 05:56:02.291229 kubelet[3105]: E0707 05:56:02.290655 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6jpc6_kube-system(b7f453a6-8802-4475-abf7-3bbe1e10231e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6jpc6_kube-system(b7f453a6-8802-4475-abf7-3bbe1e10231e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6jpc6" podUID="b7f453a6-8802-4475-abf7-3bbe1e10231e" Jul 7 05:56:02.300213 containerd[1711]: time="2025-07-07T05:56:02.300146947Z" level=error msg="Failed to destroy network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.300548 containerd[1711]: time="2025-07-07T05:56:02.300516948Z" level=error msg="encountered an error cleaning up failed sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.300603 containerd[1711]: time="2025-07-07T05:56:02.300570428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7c8tc,Uid:93d92675-dc75-493f-afb1-b5a1b922550a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.301297 kubelet[3105]: E0707 05:56:02.300809 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.301297 kubelet[3105]: E0707 05:56:02.300872 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7c8tc" Jul 7 05:56:02.301297 kubelet[3105]: E0707 05:56:02.300893 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7c8tc" Jul 7 05:56:02.301522 kubelet[3105]: E0707 05:56:02.300932 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7c8tc_kube-system(93d92675-dc75-493f-afb1-b5a1b922550a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7c8tc_kube-system(93d92675-dc75-493f-afb1-b5a1b922550a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7c8tc" podUID="93d92675-dc75-493f-afb1-b5a1b922550a" Jul 7 05:56:02.303328 containerd[1711]: time="2025-07-07T05:56:02.303218794Z" level=error msg="Failed to destroy network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.305680 containerd[1711]: time="2025-07-07T05:56:02.303974995Z" level=error msg="encountered an error cleaning up failed sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.305680 containerd[1711]: time="2025-07-07T05:56:02.304219356Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7fbcfc8df4-stb7m,Uid:381db6a2-37d3-4dd2-8605-5ee49bf45bbc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.305835 kubelet[3105]: E0707 05:56:02.304781 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.305835 kubelet[3105]: E0707 05:56:02.304850 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fbcfc8df4-stb7m" Jul 7 05:56:02.305835 kubelet[3105]: E0707 05:56:02.304872 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fbcfc8df4-stb7m" Jul 7 05:56:02.305950 kubelet[3105]: E0707 05:56:02.304918 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-7fbcfc8df4-stb7m_calico-system(381db6a2-37d3-4dd2-8605-5ee49bf45bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fbcfc8df4-stb7m_calico-system(381db6a2-37d3-4dd2-8605-5ee49bf45bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fbcfc8df4-stb7m" podUID="381db6a2-37d3-4dd2-8605-5ee49bf45bbc" Jul 7 05:56:02.314815 containerd[1711]: time="2025-07-07T05:56:02.314740579Z" level=error msg="Failed to destroy network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.315181 containerd[1711]: time="2025-07-07T05:56:02.315150340Z" level=error msg="encountered an error cleaning up failed sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.315254 containerd[1711]: time="2025-07-07T05:56:02.315228020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dff4fc68-6rlws,Uid:9843f4c8-0ee5-49b8-b775-11d693445b56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.315495 kubelet[3105]: E0707 05:56:02.315450 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.315557 kubelet[3105]: E0707 05:56:02.315518 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" Jul 7 05:56:02.315557 kubelet[3105]: E0707 05:56:02.315537 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" Jul 7 05:56:02.315610 kubelet[3105]: E0707 05:56:02.315579 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dff4fc68-6rlws_calico-system(9843f4c8-0ee5-49b8-b775-11d693445b56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dff4fc68-6rlws_calico-system(9843f4c8-0ee5-49b8-b775-11d693445b56)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" podUID="9843f4c8-0ee5-49b8-b775-11d693445b56" Jul 7 05:56:02.316988 containerd[1711]: time="2025-07-07T05:56:02.316863783Z" level=error msg="Failed to destroy network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.318244 containerd[1711]: time="2025-07-07T05:56:02.318213146Z" level=error msg="encountered an error cleaning up failed sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.319111 containerd[1711]: time="2025-07-07T05:56:02.318403387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvv7q,Uid:6f156b26-7db3-4daa-b70b-8db09e58fe84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.323438 kubelet[3105]: E0707 05:56:02.323137 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.323438 kubelet[3105]: E0707 05:56:02.323199 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:02.323438 kubelet[3105]: E0707 05:56:02.323219 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pvv7q" Jul 7 05:56:02.323639 kubelet[3105]: E0707 05:56:02.323264 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-pvv7q_calico-system(6f156b26-7db3-4daa-b70b-8db09e58fe84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-pvv7q_calico-system(6f156b26-7db3-4daa-b70b-8db09e58fe84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-pvv7q" 
podUID="6f156b26-7db3-4daa-b70b-8db09e58fe84" Jul 7 05:56:02.325756 containerd[1711]: time="2025-07-07T05:56:02.325618402Z" level=error msg="Failed to destroy network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.326211 containerd[1711]: time="2025-07-07T05:56:02.325991483Z" level=error msg="encountered an error cleaning up failed sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.326211 containerd[1711]: time="2025-07-07T05:56:02.326046683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-r26qp,Uid:d4471ce3-99ef-4485-9426-4f1c145f77dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.326700 kubelet[3105]: E0707 05:56:02.326346 3105 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:02.326700 kubelet[3105]: E0707 05:56:02.326399 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" Jul 7 05:56:02.326700 kubelet[3105]: E0707 05:56:02.326419 3105 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" Jul 7 05:56:02.326917 kubelet[3105]: E0707 05:56:02.326495 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8444f76b56-r26qp_calico-apiserver(d4471ce3-99ef-4485-9426-4f1c145f77dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8444f76b56-r26qp_calico-apiserver(d4471ce3-99ef-4485-9426-4f1c145f77dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" podUID="d4471ce3-99ef-4485-9426-4f1c145f77dd" Jul 7 05:56:02.925277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d-shm.mount: Deactivated successfully. 
Jul 7 05:56:02.925376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5-shm.mount: Deactivated successfully. Jul 7 05:56:03.125607 kubelet[3105]: I0707 05:56:03.125581 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:03.127696 containerd[1711]: time="2025-07-07T05:56:03.127171306Z" level=info msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" Jul 7 05:56:03.127696 containerd[1711]: time="2025-07-07T05:56:03.127371267Z" level=info msg="Ensure that sandbox d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877 in task-service has been cleanup successfully" Jul 7 05:56:03.128208 kubelet[3105]: I0707 05:56:03.128161 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:03.129190 containerd[1711]: time="2025-07-07T05:56:03.129122791Z" level=info msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" Jul 7 05:56:03.129330 containerd[1711]: time="2025-07-07T05:56:03.129303591Z" level=info msg="Ensure that sandbox d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5 in task-service has been cleanup successfully" Jul 7 05:56:03.134628 kubelet[3105]: I0707 05:56:03.133888 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:03.135938 containerd[1711]: time="2025-07-07T05:56:03.135726645Z" level=info msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" Jul 7 05:56:03.136450 containerd[1711]: time="2025-07-07T05:56:03.136331206Z" level=info msg="Ensure that sandbox 
daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8 in task-service has been cleanup successfully" Jul 7 05:56:03.140599 kubelet[3105]: I0707 05:56:03.139634 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:03.140732 containerd[1711]: time="2025-07-07T05:56:03.140140095Z" level=info msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" Jul 7 05:56:03.140732 containerd[1711]: time="2025-07-07T05:56:03.140307295Z" level=info msg="Ensure that sandbox 6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5 in task-service has been cleanup successfully" Jul 7 05:56:03.143170 kubelet[3105]: I0707 05:56:03.142773 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:03.144912 kubelet[3105]: I0707 05:56:03.144888 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:03.146204 containerd[1711]: time="2025-07-07T05:56:03.145017785Z" level=info msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" Jul 7 05:56:03.147057 containerd[1711]: time="2025-07-07T05:56:03.146910109Z" level=info msg="Ensure that sandbox 86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d in task-service has been cleanup successfully" Jul 7 05:56:03.147490 containerd[1711]: time="2025-07-07T05:56:03.147378510Z" level=info msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" Jul 7 05:56:03.149636 containerd[1711]: time="2025-07-07T05:56:03.149602075Z" level=info msg="Ensure that sandbox 0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973 in task-service has been cleanup successfully" Jul 7 
05:56:03.153363 kubelet[3105]: I0707 05:56:03.153236 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:03.160864 containerd[1711]: time="2025-07-07T05:56:03.160816380Z" level=info msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" Jul 7 05:56:03.161480 containerd[1711]: time="2025-07-07T05:56:03.161450741Z" level=info msg="Ensure that sandbox 02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c in task-service has been cleanup successfully" Jul 7 05:56:03.168217 kubelet[3105]: I0707 05:56:03.168036 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:03.170749 containerd[1711]: time="2025-07-07T05:56:03.170422440Z" level=info msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" Jul 7 05:56:03.171078 containerd[1711]: time="2025-07-07T05:56:03.170974842Z" level=info msg="Ensure that sandbox b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3 in task-service has been cleanup successfully" Jul 7 05:56:03.224739 containerd[1711]: time="2025-07-07T05:56:03.224683158Z" level=error msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" failed" error="failed to destroy network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.225554 kubelet[3105]: E0707 05:56:03.225068 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:03.225554 kubelet[3105]: E0707 05:56:03.225411 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d"} Jul 7 05:56:03.225554 kubelet[3105]: E0707 05:56:03.225490 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.225554 kubelet[3105]: E0707 05:56:03.225522 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e285d422-7e9a-40bd-bc64-700d7b15f4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2zxks" podUID="e285d422-7e9a-40bd-bc64-700d7b15f4ab" Jul 7 05:56:03.241764 containerd[1711]: time="2025-07-07T05:56:03.241216394Z" level=error msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" failed" error="failed to destroy network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.241995 kubelet[3105]: E0707 05:56:03.241513 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:03.241995 kubelet[3105]: E0707 05:56:03.241564 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8"} Jul 7 05:56:03.241995 kubelet[3105]: E0707 05:56:03.241598 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93d92675-dc75-493f-afb1-b5a1b922550a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.241995 kubelet[3105]: E0707 05:56:03.241630 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93d92675-dc75-493f-afb1-b5a1b922550a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7c8tc" podUID="93d92675-dc75-493f-afb1-b5a1b922550a" Jul 7 05:56:03.270541 containerd[1711]: time="2025-07-07T05:56:03.270412098Z" level=error msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" failed" error="failed to destroy network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.271530 kubelet[3105]: E0707 05:56:03.271381 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:03.271530 kubelet[3105]: E0707 05:56:03.271438 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877"} Jul 7 05:56:03.271530 kubelet[3105]: E0707 05:56:03.271471 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7f453a6-8802-4475-abf7-3bbe1e10231e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.271530 kubelet[3105]: E0707 05:56:03.271494 3105 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7f453a6-8802-4475-abf7-3bbe1e10231e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6jpc6" podUID="b7f453a6-8802-4475-abf7-3bbe1e10231e" Jul 7 05:56:03.290738 containerd[1711]: time="2025-07-07T05:56:03.290258101Z" level=error msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" failed" error="failed to destroy network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.291263 kubelet[3105]: E0707 05:56:03.290963 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:03.291263 kubelet[3105]: E0707 05:56:03.291043 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5"} Jul 7 05:56:03.291263 kubelet[3105]: E0707 05:56:03.291164 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"654bded6-4470-4285-bbbd-8ec39f31a268\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.291263 kubelet[3105]: E0707 05:56:03.291191 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"654bded6-4470-4285-bbbd-8ec39f31a268\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" podUID="654bded6-4470-4285-bbbd-8ec39f31a268" Jul 7 05:56:03.294498 containerd[1711]: time="2025-07-07T05:56:03.294281830Z" level=error msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" failed" error="failed to destroy network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.295343 kubelet[3105]: E0707 05:56:03.295166 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:03.295343 kubelet[3105]: E0707 05:56:03.295233 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973"} Jul 7 05:56:03.295343 kubelet[3105]: E0707 05:56:03.295265 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f156b26-7db3-4daa-b70b-8db09e58fe84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.295343 kubelet[3105]: E0707 05:56:03.295311 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f156b26-7db3-4daa-b70b-8db09e58fe84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-pvv7q" podUID="6f156b26-7db3-4daa-b70b-8db09e58fe84" Jul 7 05:56:03.296363 containerd[1711]: time="2025-07-07T05:56:03.296326154Z" level=error msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" failed" error="failed to destroy network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 7 05:56:03.296724 kubelet[3105]: E0707 05:56:03.296596 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:03.296724 kubelet[3105]: E0707 05:56:03.296634 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c"} Jul 7 05:56:03.296724 kubelet[3105]: E0707 05:56:03.296661 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4471ce3-99ef-4485-9426-4f1c145f77dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.296724 kubelet[3105]: E0707 05:56:03.296685 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4471ce3-99ef-4485-9426-4f1c145f77dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" podUID="d4471ce3-99ef-4485-9426-4f1c145f77dd" Jul 7 05:56:03.297504 
containerd[1711]: time="2025-07-07T05:56:03.297472597Z" level=error msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" failed" error="failed to destroy network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.298119 kubelet[3105]: E0707 05:56:03.297880 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:03.298119 kubelet[3105]: E0707 05:56:03.298061 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5"} Jul 7 05:56:03.298224 containerd[1711]: time="2025-07-07T05:56:03.298190038Z" level=error msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" failed" error="failed to destroy network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:56:03.298429 kubelet[3105]: E0707 05:56:03.298314 3105 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:03.298429 kubelet[3105]: E0707 05:56:03.298338 3105 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3"} Jul 7 05:56:03.298429 kubelet[3105]: E0707 05:56:03.298359 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.298429 kubelet[3105]: E0707 05:56:03.298383 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fbcfc8df4-stb7m" podUID="381db6a2-37d3-4dd2-8605-5ee49bf45bbc" Jul 7 05:56:03.298679 kubelet[3105]: E0707 05:56:03.298609 3105 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9843f4c8-0ee5-49b8-b775-11d693445b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:56:03.298679 kubelet[3105]: E0707 05:56:03.298636 3105 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9843f4c8-0ee5-49b8-b775-11d693445b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" podUID="9843f4c8-0ee5-49b8-b775-11d693445b56" Jul 7 05:56:06.672716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031394764.mount: Deactivated successfully. 
Jul 7 05:56:06.751497 containerd[1711]: time="2025-07-07T05:56:06.750726270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:06.752746 containerd[1711]: time="2025-07-07T05:56:06.752708714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 05:56:06.756125 containerd[1711]: time="2025-07-07T05:56:06.756047121Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:06.759521 containerd[1711]: time="2025-07-07T05:56:06.759466689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:06.760526 containerd[1711]: time="2025-07-07T05:56:06.760123650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.623754659s" Jul 7 05:56:06.760526 containerd[1711]: time="2025-07-07T05:56:06.760165850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 05:56:06.776872 containerd[1711]: time="2025-07-07T05:56:06.776771406Z" level=info msg="CreateContainer within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 05:56:06.826982 containerd[1711]: time="2025-07-07T05:56:06.826926715Z" level=info msg="CreateContainer 
within sandbox \"984b25abc499dac7b3131d3b571a16ebde3f7090b42e35bba574136bc8ca2867\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634\"" Jul 7 05:56:06.827853 containerd[1711]: time="2025-07-07T05:56:06.827805797Z" level=info msg="StartContainer for \"de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634\"" Jul 7 05:56:06.854295 systemd[1]: Started cri-containerd-de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634.scope - libcontainer container de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634. Jul 7 05:56:06.888118 containerd[1711]: time="2025-07-07T05:56:06.888020728Z" level=info msg="StartContainer for \"de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634\" returns successfully" Jul 7 05:56:07.143295 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 05:56:07.143405 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 05:56:07.226734 kubelet[3105]: I0707 05:56:07.226572 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9trfb" podStartSLOduration=1.463870257 podStartE2EDuration="16.226550385s" podCreationTimestamp="2025-07-07 05:55:51 +0000 UTC" firstStartedPulling="2025-07-07 05:55:51.998220844 +0000 UTC m=+23.123268816" lastFinishedPulling="2025-07-07 05:56:06.760900972 +0000 UTC m=+37.885948944" observedRunningTime="2025-07-07 05:56:07.226310064 +0000 UTC m=+38.351358036" watchObservedRunningTime="2025-07-07 05:56:07.226550385 +0000 UTC m=+38.351598357" Jul 7 05:56:07.379789 containerd[1711]: time="2025-07-07T05:56:07.379741038Z" level=info msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.482 [INFO][4346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.482 [INFO][4346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" iface="eth0" netns="/var/run/netns/cni-ea463582-9d60-1a18-7af3-ead6c0a738db" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.482 [INFO][4346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" iface="eth0" netns="/var/run/netns/cni-ea463582-9d60-1a18-7af3-ead6c0a738db" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.483 [INFO][4346] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" iface="eth0" netns="/var/run/netns/cni-ea463582-9d60-1a18-7af3-ead6c0a738db" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.483 [INFO][4346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.483 [INFO][4346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.532 [INFO][4358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.532 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.532 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.548 [WARNING][4358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.548 [INFO][4358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.551 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:07.555561 containerd[1711]: 2025-07-07 05:56:07.553 [INFO][4346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:07.556226 containerd[1711]: time="2025-07-07T05:56:07.555750301Z" level=info msg="TearDown network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" successfully" Jul 7 05:56:07.556226 containerd[1711]: time="2025-07-07T05:56:07.555778901Z" level=info msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" returns successfully" Jul 7 05:56:07.575907 kubelet[3105]: I0707 05:56:07.575855 3105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhd4t\" (UniqueName: \"kubernetes.io/projected/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-kube-api-access-hhd4t\") pod \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " Jul 7 05:56:07.575907 kubelet[3105]: I0707 05:56:07.575912 3105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-backend-key-pair\") pod \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " Jul 7 05:56:07.576112 kubelet[3105]: I0707 05:56:07.575940 3105 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-ca-bundle\") pod \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\" (UID: \"381db6a2-37d3-4dd2-8605-5ee49bf45bbc\") " Jul 7 05:56:07.576375 kubelet[3105]: I0707 05:56:07.576349 3105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "381db6a2-37d3-4dd2-8605-5ee49bf45bbc" (UID: "381db6a2-37d3-4dd2-8605-5ee49bf45bbc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:56:07.580999 kubelet[3105]: I0707 05:56:07.580888 3105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "381db6a2-37d3-4dd2-8605-5ee49bf45bbc" (UID: "381db6a2-37d3-4dd2-8605-5ee49bf45bbc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 05:56:07.583746 kubelet[3105]: I0707 05:56:07.583701 3105 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-kube-api-access-hhd4t" (OuterVolumeSpecName: "kube-api-access-hhd4t") pod "381db6a2-37d3-4dd2-8605-5ee49bf45bbc" (UID: "381db6a2-37d3-4dd2-8605-5ee49bf45bbc"). InnerVolumeSpecName "kube-api-access-hhd4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:56:07.674477 systemd[1]: run-netns-cni\x2dea463582\x2d9d60\x2d1a18\x2d7af3\x2dead6c0a738db.mount: Deactivated successfully. Jul 7 05:56:07.674570 systemd[1]: var-lib-kubelet-pods-381db6a2\x2d37d3\x2d4dd2\x2d8605\x2d5ee49bf45bbc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 05:56:07.674629 systemd[1]: var-lib-kubelet-pods-381db6a2\x2d37d3\x2d4dd2\x2d8605\x2d5ee49bf45bbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhd4t.mount: Deactivated successfully. Jul 7 05:56:07.677116 kubelet[3105]: I0707 05:56:07.677025 3105 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-2bf61d9e54\" DevicePath \"\"" Jul 7 05:56:07.677116 kubelet[3105]: I0707 05:56:07.677078 3105 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-whisker-ca-bundle\") on node \"ci-4081.3.4-a-2bf61d9e54\" DevicePath \"\"" Jul 7 05:56:07.677116 kubelet[3105]: I0707 05:56:07.677124 3105 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhd4t\" (UniqueName: \"kubernetes.io/projected/381db6a2-37d3-4dd2-8605-5ee49bf45bbc-kube-api-access-hhd4t\") on node \"ci-4081.3.4-a-2bf61d9e54\" DevicePath \"\"" Jul 7 05:56:08.196724 systemd[1]: Removed slice kubepods-besteffort-pod381db6a2_37d3_4dd2_8605_5ee49bf45bbc.slice - libcontainer container kubepods-besteffort-pod381db6a2_37d3_4dd2_8605_5ee49bf45bbc.slice. Jul 7 05:56:08.295210 systemd[1]: Created slice kubepods-besteffort-pod45abbb2d_2c37_4175_a08e_cb6e2900ac7c.slice - libcontainer container kubepods-besteffort-pod45abbb2d_2c37_4175_a08e_cb6e2900ac7c.slice. 
Jul 7 05:56:08.482805 kubelet[3105]: I0707 05:56:08.482677 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45abbb2d-2c37-4175-a08e-cb6e2900ac7c-whisker-ca-bundle\") pod \"whisker-54cbb6795d-2bdk9\" (UID: \"45abbb2d-2c37-4175-a08e-cb6e2900ac7c\") " pod="calico-system/whisker-54cbb6795d-2bdk9" Jul 7 05:56:08.482805 kubelet[3105]: I0707 05:56:08.482728 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl8l\" (UniqueName: \"kubernetes.io/projected/45abbb2d-2c37-4175-a08e-cb6e2900ac7c-kube-api-access-9jl8l\") pod \"whisker-54cbb6795d-2bdk9\" (UID: \"45abbb2d-2c37-4175-a08e-cb6e2900ac7c\") " pod="calico-system/whisker-54cbb6795d-2bdk9" Jul 7 05:56:08.482805 kubelet[3105]: I0707 05:56:08.482754 3105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/45abbb2d-2c37-4175-a08e-cb6e2900ac7c-whisker-backend-key-pair\") pod \"whisker-54cbb6795d-2bdk9\" (UID: \"45abbb2d-2c37-4175-a08e-cb6e2900ac7c\") " pod="calico-system/whisker-54cbb6795d-2bdk9" Jul 7 05:56:08.900232 containerd[1711]: time="2025-07-07T05:56:08.900113381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cbb6795d-2bdk9,Uid:45abbb2d-2c37-4175-a08e-cb6e2900ac7c,Namespace:calico-system,Attempt:0,}" Jul 7 05:56:09.001611 kubelet[3105]: I0707 05:56:09.001431 3105 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381db6a2-37d3-4dd2-8605-5ee49bf45bbc" path="/var/lib/kubelet/pods/381db6a2-37d3-4dd2-8605-5ee49bf45bbc/volumes" Jul 7 05:56:09.297005 systemd-networkd[1453]: califcac7b8c7ee: Link UP Jul 7 05:56:09.297273 systemd-networkd[1453]: califcac7b8c7ee: Gained carrier Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:08.984 [INFO][4493] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.218 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0 whisker-54cbb6795d- calico-system 45abbb2d-2c37-4175-a08e-cb6e2900ac7c 869 0 2025-07-07 05:56:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54cbb6795d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 whisker-54cbb6795d-2bdk9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califcac7b8c7ee [] [] }} ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.218 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.244 [INFO][4507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" HandleID="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.244 [INFO][4507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" HandleID="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" 
Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"whisker-54cbb6795d-2bdk9", "timestamp":"2025-07-07 05:56:09.244498158 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.244 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.244 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.244 [INFO][4507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.253 [INFO][4507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.257 [INFO][4507] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.261 [INFO][4507] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.263 [INFO][4507] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.265 [INFO][4507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 
containerd[1711]: 2025-07-07 05:56:09.265 [INFO][4507] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.271 [INFO][4507] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7 Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.276 [INFO][4507] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.286 [INFO][4507] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.1/26] block=192.168.105.0/26 handle="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.286 [INFO][4507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.1/26] handle="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.286 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:56:09.321130 containerd[1711]: 2025-07-07 05:56:09.286 [INFO][4507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.1/26] IPv6=[] ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" HandleID="k8s-pod-network.fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.288 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0", GenerateName:"whisker-54cbb6795d-", Namespace:"calico-system", SelfLink:"", UID:"45abbb2d-2c37-4175-a08e-cb6e2900ac7c", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54cbb6795d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"whisker-54cbb6795d-2bdk9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"califcac7b8c7ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.288 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.1/32] ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.288 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcac7b8c7ee ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.299 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.299 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0", GenerateName:"whisker-54cbb6795d-", Namespace:"calico-system", SelfLink:"", UID:"45abbb2d-2c37-4175-a08e-cb6e2900ac7c", 
ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54cbb6795d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7", Pod:"whisker-54cbb6795d-2bdk9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califcac7b8c7ee", MAC:"02:7a:31:e0:fa:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:09.321720 containerd[1711]: 2025-07-07 05:56:09.316 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7" Namespace="calico-system" Pod="whisker-54cbb6795d-2bdk9" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--54cbb6795d--2bdk9-eth0" Jul 7 05:56:09.832825 containerd[1711]: time="2025-07-07T05:56:09.832659576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:09.832825 containerd[1711]: time="2025-07-07T05:56:09.832722976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:09.832825 containerd[1711]: time="2025-07-07T05:56:09.832739456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:09.832825 containerd[1711]: time="2025-07-07T05:56:09.832831336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:09.858330 systemd[1]: Started cri-containerd-fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7.scope - libcontainer container fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7. Jul 7 05:56:09.891685 containerd[1711]: time="2025-07-07T05:56:09.891646702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cbb6795d-2bdk9,Uid:45abbb2d-2c37-4175-a08e-cb6e2900ac7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7\"" Jul 7 05:56:09.894218 containerd[1711]: time="2025-07-07T05:56:09.893853227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 05:56:10.965207 systemd-networkd[1453]: califcac7b8c7ee: Gained IPv6LL Jul 7 05:56:11.087998 containerd[1711]: time="2025-07-07T05:56:11.087950261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:11.090922 containerd[1711]: time="2025-07-07T05:56:11.090880307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 05:56:11.095409 containerd[1711]: time="2025-07-07T05:56:11.095210996Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:11.098035 containerd[1711]: time="2025-07-07T05:56:11.097993242Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:11.099185 containerd[1711]: time="2025-07-07T05:56:11.098510444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.204618257s" Jul 7 05:56:11.099185 containerd[1711]: time="2025-07-07T05:56:11.098546164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 05:56:11.102308 containerd[1711]: time="2025-07-07T05:56:11.102158851Z" level=info msg="CreateContainer within sandbox \"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 05:56:11.147324 containerd[1711]: time="2025-07-07T05:56:11.147271028Z" level=info msg="CreateContainer within sandbox \"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6f27ed19bfc35c68cbe1221941db65771c6e582d3c7e380adeb70c3fd4cca6c8\"" Jul 7 05:56:11.148637 containerd[1711]: time="2025-07-07T05:56:11.148433270Z" level=info msg="StartContainer for \"6f27ed19bfc35c68cbe1221941db65771c6e582d3c7e380adeb70c3fd4cca6c8\"" Jul 7 05:56:11.192358 systemd[1]: Started cri-containerd-6f27ed19bfc35c68cbe1221941db65771c6e582d3c7e380adeb70c3fd4cca6c8.scope - libcontainer container 6f27ed19bfc35c68cbe1221941db65771c6e582d3c7e380adeb70c3fd4cca6c8. 
Jul 7 05:56:11.264521 containerd[1711]: time="2025-07-07T05:56:11.264220598Z" level=info msg="StartContainer for \"6f27ed19bfc35c68cbe1221941db65771c6e582d3c7e380adeb70c3fd4cca6c8\" returns successfully" Jul 7 05:56:11.267054 containerd[1711]: time="2025-07-07T05:56:11.266670123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 05:56:13.038080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488324194.mount: Deactivated successfully. Jul 7 05:56:13.094509 containerd[1711]: time="2025-07-07T05:56:13.094466633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:13.096414 containerd[1711]: time="2025-07-07T05:56:13.096364477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 05:56:13.099477 containerd[1711]: time="2025-07-07T05:56:13.099422523Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:13.107129 containerd[1711]: time="2025-07-07T05:56:13.107039620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:13.107875 containerd[1711]: time="2025-07-07T05:56:13.107743581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.840688897s" Jul 7 05:56:13.107875 containerd[1711]: 
time="2025-07-07T05:56:13.107778861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 05:56:13.110337 containerd[1711]: time="2025-07-07T05:56:13.110182506Z" level=info msg="CreateContainer within sandbox \"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 05:56:13.143890 containerd[1711]: time="2025-07-07T05:56:13.143825578Z" level=info msg="CreateContainer within sandbox \"fa6f5ba3a061c822198310f420dd869d426c31eb1ddc9491010a0de7c3d8d1d7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d7263a615dfff804f76923abeaeeb288e99fb9d76dc0eab58f029b2892cedba4\"" Jul 7 05:56:13.151212 containerd[1711]: time="2025-07-07T05:56:13.151147874Z" level=info msg="StartContainer for \"d7263a615dfff804f76923abeaeeb288e99fb9d76dc0eab58f029b2892cedba4\"" Jul 7 05:56:13.186221 systemd[1]: Started cri-containerd-d7263a615dfff804f76923abeaeeb288e99fb9d76dc0eab58f029b2892cedba4.scope - libcontainer container d7263a615dfff804f76923abeaeeb288e99fb9d76dc0eab58f029b2892cedba4. Jul 7 05:56:13.250874 containerd[1711]: time="2025-07-07T05:56:13.250807167Z" level=info msg="StartContainer for \"d7263a615dfff804f76923abeaeeb288e99fb9d76dc0eab58f029b2892cedba4\" returns successfully" Jul 7 05:56:14.013355 containerd[1711]: time="2025-07-07T05:56:14.013300198Z" level=info msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.074 [INFO][4747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.075 [INFO][4747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" iface="eth0" netns="/var/run/netns/cni-cba51f7b-d188-a792-4150-289bcf75e59c" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.075 [INFO][4747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" iface="eth0" netns="/var/run/netns/cni-cba51f7b-d188-a792-4150-289bcf75e59c" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.076 [INFO][4747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" iface="eth0" netns="/var/run/netns/cni-cba51f7b-d188-a792-4150-289bcf75e59c" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.076 [INFO][4747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.076 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.099 [INFO][4755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.099 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.099 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.108 [WARNING][4755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.108 [INFO][4755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.110 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:14.113188 containerd[1711]: 2025-07-07 05:56:14.111 [INFO][4747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:14.115391 containerd[1711]: time="2025-07-07T05:56:14.115230856Z" level=info msg="TearDown network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" successfully" Jul 7 05:56:14.115391 containerd[1711]: time="2025-07-07T05:56:14.115291376Z" level=info msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" returns successfully" Jul 7 05:56:14.116822 systemd[1]: run-netns-cni\x2dcba51f7b\x2dd188\x2da792\x2d4150\x2d289bcf75e59c.mount: Deactivated successfully. 
Jul 7 05:56:14.117572 containerd[1711]: time="2025-07-07T05:56:14.117522461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7c8tc,Uid:93d92675-dc75-493f-afb1-b5a1b922550a,Namespace:kube-system,Attempt:1,}" Jul 7 05:56:14.248128 kubelet[3105]: I0707 05:56:14.248036 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54cbb6795d-2bdk9" podStartSLOduration=3.032678063 podStartE2EDuration="6.24801598s" podCreationTimestamp="2025-07-07 05:56:08 +0000 UTC" firstStartedPulling="2025-07-07 05:56:09.893452986 +0000 UTC m=+41.018500958" lastFinishedPulling="2025-07-07 05:56:13.108790903 +0000 UTC m=+44.233838875" observedRunningTime="2025-07-07 05:56:14.247632219 +0000 UTC m=+45.372680231" watchObservedRunningTime="2025-07-07 05:56:14.24801598 +0000 UTC m=+45.373063952" Jul 7 05:56:14.292866 systemd-networkd[1453]: cali86ed417a576: Link UP Jul 7 05:56:14.295232 systemd-networkd[1453]: cali86ed417a576: Gained carrier Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.174 [INFO][4761] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.187 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0 coredns-7c65d6cfc9- kube-system 93d92675-dc75-493f-afb1-b5a1b922550a 897 0 2025-07-07 05:55:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 coredns-7c65d6cfc9-7c8tc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86ed417a576 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.188 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.210 [INFO][4773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" HandleID="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.210 [INFO][4773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" HandleID="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"coredns-7c65d6cfc9-7c8tc", "timestamp":"2025-07-07 05:56:14.210241339 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.210 [INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.210 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.210 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.222 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.230 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.236 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.238 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.249 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.249 [INFO][4773] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.255 [INFO][4773] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018 Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.267 [INFO][4773] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.281 [INFO][4773] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.105.2/26] block=192.168.105.0/26 handle="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.281 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.2/26] handle="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.281 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:14.322719 containerd[1711]: 2025-07-07 05:56:14.281 [INFO][4773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.2/26] IPv6=[] ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" HandleID="k8s-pod-network.e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.286 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93d92675-dc75-493f-afb1-b5a1b922550a", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"coredns-7c65d6cfc9-7c8tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86ed417a576", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.288 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.2/32] ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.288 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86ed417a576 ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.296 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.296 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93d92675-dc75-493f-afb1-b5a1b922550a", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018", Pod:"coredns-7c65d6cfc9-7c8tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86ed417a576", MAC:"7e:7b:b9:a2:79:00", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:14.325828 containerd[1711]: 2025-07-07 05:56:14.312 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7c8tc" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:14.366579 containerd[1711]: time="2025-07-07T05:56:14.366377273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:14.366579 containerd[1711]: time="2025-07-07T05:56:14.366434874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:14.366579 containerd[1711]: time="2025-07-07T05:56:14.366446434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:14.366579 containerd[1711]: time="2025-07-07T05:56:14.366539434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:14.385323 systemd[1]: Started cri-containerd-e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018.scope - libcontainer container e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018. 
Jul 7 05:56:14.420443 containerd[1711]: time="2025-07-07T05:56:14.420306349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7c8tc,Uid:93d92675-dc75-493f-afb1-b5a1b922550a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018\"" Jul 7 05:56:14.435457 containerd[1711]: time="2025-07-07T05:56:14.435074060Z" level=info msg="CreateContainer within sandbox \"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:56:14.472175 containerd[1711]: time="2025-07-07T05:56:14.472081820Z" level=info msg="CreateContainer within sandbox \"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69c282ba6290ed8a433ebe2f6ddab09ef786ec97bcb26d09ac29b925a952c9f3\"" Jul 7 05:56:14.473537 containerd[1711]: time="2025-07-07T05:56:14.473292102Z" level=info msg="StartContainer for \"69c282ba6290ed8a433ebe2f6ddab09ef786ec97bcb26d09ac29b925a952c9f3\"" Jul 7 05:56:14.529336 systemd[1]: Started cri-containerd-69c282ba6290ed8a433ebe2f6ddab09ef786ec97bcb26d09ac29b925a952c9f3.scope - libcontainer container 69c282ba6290ed8a433ebe2f6ddab09ef786ec97bcb26d09ac29b925a952c9f3. 
Jul 7 05:56:14.683865 containerd[1711]: time="2025-07-07T05:56:14.682079829Z" level=info msg="StartContainer for \"69c282ba6290ed8a433ebe2f6ddab09ef786ec97bcb26d09ac29b925a952c9f3\" returns successfully" Jul 7 05:56:14.998583 containerd[1711]: time="2025-07-07T05:56:14.997368463Z" level=info msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" Jul 7 05:56:14.999205 containerd[1711]: time="2025-07-07T05:56:14.998740386Z" level=info msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" Jul 7 05:56:15.000439 containerd[1711]: time="2025-07-07T05:56:15.000406110Z" level=info msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.102 [INFO][4923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.102 [INFO][4923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" iface="eth0" netns="/var/run/netns/cni-e855e5bd-a114-b7e1-d4da-948520d25ef0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.103 [INFO][4923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" iface="eth0" netns="/var/run/netns/cni-e855e5bd-a114-b7e1-d4da-948520d25ef0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.103 [INFO][4923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" iface="eth0" netns="/var/run/netns/cni-e855e5bd-a114-b7e1-d4da-948520d25ef0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.103 [INFO][4923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.103 [INFO][4923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.151 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.151 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.151 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.160 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.160 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.163 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.170388 containerd[1711]: 2025-07-07 05:56:15.168 [INFO][4923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:15.174312 containerd[1711]: time="2025-07-07T05:56:15.173998081Z" level=info msg="TearDown network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" successfully" Jul 7 05:56:15.174312 containerd[1711]: time="2025-07-07T05:56:15.174043001Z" level=info msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" returns successfully" Jul 7 05:56:15.175291 systemd[1]: run-netns-cni\x2de855e5bd\x2da114\x2db7e1\x2dd4da\x2d948520d25ef0.mount: Deactivated successfully. 
Jul 7 05:56:15.176565 containerd[1711]: time="2025-07-07T05:56:15.176533686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-r26qp,Uid:d4471ce3-99ef-4485-9426-4f1c145f77dd,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:56:15.181768 kubelet[3105]: I0707 05:56:15.181555 3105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.095 [INFO][4908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.096 [INFO][4908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" iface="eth0" netns="/var/run/netns/cni-751c784c-3411-3a31-df25-e7b308eb1072" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.096 [INFO][4908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" iface="eth0" netns="/var/run/netns/cni-751c784c-3411-3a31-df25-e7b308eb1072" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.097 [INFO][4908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" iface="eth0" netns="/var/run/netns/cni-751c784c-3411-3a31-df25-e7b308eb1072" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.097 [INFO][4908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.098 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.152 [INFO][4940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.152 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.163 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.177 [WARNING][4940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.178 [INFO][4940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.182 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.190970 containerd[1711]: 2025-07-07 05:56:15.188 [INFO][4908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:15.195021 containerd[1711]: time="2025-07-07T05:56:15.194544565Z" level=info msg="TearDown network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" successfully" Jul 7 05:56:15.195021 containerd[1711]: time="2025-07-07T05:56:15.194581205Z" level=info msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" returns successfully" Jul 7 05:56:15.195636 systemd[1]: run-netns-cni\x2d751c784c\x2d3411\x2d3a31\x2ddf25\x2de7b308eb1072.mount: Deactivated successfully. 
Jul 7 05:56:15.196792 containerd[1711]: time="2025-07-07T05:56:15.196429489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2zxks,Uid:e285d422-7e9a-40bd-bc64-700d7b15f4ab,Namespace:calico-system,Attempt:1,}" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.087 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.087 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" iface="eth0" netns="/var/run/netns/cni-24f95707-5b50-328d-4e3a-4e6bed1ae8b8" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.088 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" iface="eth0" netns="/var/run/netns/cni-24f95707-5b50-328d-4e3a-4e6bed1ae8b8" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.092 [INFO][4922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" iface="eth0" netns="/var/run/netns/cni-24f95707-5b50-328d-4e3a-4e6bed1ae8b8" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.092 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.092 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.157 [INFO][4938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.157 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.180 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.200 [WARNING][4938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.200 [INFO][4938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.205 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.209673 containerd[1711]: 2025-07-07 05:56:15.207 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:15.214006 containerd[1711]: time="2025-07-07T05:56:15.210855720Z" level=info msg="TearDown network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" successfully" Jul 7 05:56:15.214006 containerd[1711]: time="2025-07-07T05:56:15.213143965Z" level=info msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" returns successfully" Jul 7 05:56:15.213209 systemd[1]: run-netns-cni\x2d24f95707\x2d5b50\x2d328d\x2d4e3a\x2d4e6bed1ae8b8.mount: Deactivated successfully. 
Jul 7 05:56:15.218106 containerd[1711]: time="2025-07-07T05:56:15.217312654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dff4fc68-6rlws,Uid:9843f4c8-0ee5-49b8-b775-11d693445b56,Namespace:calico-system,Attempt:1,}" Jul 7 05:56:15.328858 kubelet[3105]: I0707 05:56:15.326487 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7c8tc" podStartSLOduration=39.326454247 podStartE2EDuration="39.326454247s" podCreationTimestamp="2025-07-07 05:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:56:15.287995565 +0000 UTC m=+46.413043537" watchObservedRunningTime="2025-07-07 05:56:15.326454247 +0000 UTC m=+46.451502219" Jul 7 05:56:15.492045 systemd-networkd[1453]: califc52ab5d702: Link UP Jul 7 05:56:15.492715 systemd-networkd[1453]: califc52ab5d702: Gained carrier Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.300 [INFO][4959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.345 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0 calico-apiserver-8444f76b56- calico-apiserver d4471ce3-99ef-4485-9426-4f1c145f77dd 916 0 2025-07-07 05:55:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8444f76b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 calico-apiserver-8444f76b56-r26qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc52ab5d702 [] [] }} 
ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.345 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.417 [INFO][4997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" HandleID="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.419 [INFO][4997] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" HandleID="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"calico-apiserver-8444f76b56-r26qp", "timestamp":"2025-07-07 05:56:15.417914483 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.419 [INFO][4997] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.419 [INFO][4997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.419 [INFO][4997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.445 [INFO][4997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.450 [INFO][4997] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.456 [INFO][4997] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.458 [INFO][4997] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.461 [INFO][4997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.461 [INFO][4997] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.462 [INFO][4997] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490 Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.473 [INFO][4997] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 
handle="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][4997] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.3/26] block=192.168.105.0/26 handle="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][4997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.3/26] handle="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][4997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.518050 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][4997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.3/26] IPv6=[] ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" HandleID="k8s-pod-network.77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.488 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4471ce3-99ef-4485-9426-4f1c145f77dd", ResourceVersion:"916", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"calico-apiserver-8444f76b56-r26qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc52ab5d702", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.488 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.3/32] ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.488 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc52ab5d702 ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.491 [INFO][4959] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.491 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4471ce3-99ef-4485-9426-4f1c145f77dd", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490", Pod:"calico-apiserver-8444f76b56-r26qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc52ab5d702", MAC:"f2:5f:00:59:ab:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.518775 containerd[1711]: 2025-07-07 05:56:15.514 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-r26qp" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:15.596972 systemd-networkd[1453]: cali1953f46ab77: Link UP Jul 7 05:56:15.597563 containerd[1711]: time="2025-07-07T05:56:15.597063186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:15.598557 containerd[1711]: time="2025-07-07T05:56:15.597846468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:15.598557 containerd[1711]: time="2025-07-07T05:56:15.598244708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.599314 containerd[1711]: time="2025-07-07T05:56:15.599157030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.601043 systemd-networkd[1453]: cali1953f46ab77: Gained carrier Jul 7 05:56:15.635888 systemd[1]: run-containerd-runc-k8s.io-77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490-runc.vGTGwT.mount: Deactivated successfully. 
Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.324 [INFO][4971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.367 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0 csi-node-driver- calico-system e285d422-7e9a-40bd-bc64-700d7b15f4ab 915 0 2025-07-07 05:55:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 csi-node-driver-2zxks eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1953f46ab77 [] [] }} ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.367 [INFO][4971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.428 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" HandleID="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.429 [INFO][5003] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" HandleID="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"csi-node-driver-2zxks", "timestamp":"2025-07-07 05:56:15.428302105 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.429 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.483 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.546 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.551 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.556 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.558 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.560 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.560 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.562 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.568 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.584 [INFO][5003] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.105.4/26] block=192.168.105.0/26 handle="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.584 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.4/26] handle="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.584 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.644156 containerd[1711]: 2025-07-07 05:56:15.584 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.4/26] IPv6=[] ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" HandleID="k8s-pod-network.88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.589 [INFO][4971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e285d422-7e9a-40bd-bc64-700d7b15f4ab", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"csi-node-driver-2zxks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1953f46ab77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.589 [INFO][4971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.4/32] ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.589 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1953f46ab77 ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.599 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.603 
[INFO][4971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e285d422-7e9a-40bd-bc64-700d7b15f4ab", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f", Pod:"csi-node-driver-2zxks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1953f46ab77", MAC:"9a:ea:85:63:6c:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.645719 containerd[1711]: 2025-07-07 05:56:15.625 [INFO][4971] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f" Namespace="calico-system" Pod="csi-node-driver-2zxks" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:15.646917 systemd[1]: Started cri-containerd-77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490.scope - libcontainer container 77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490. Jul 7 05:56:15.718716 containerd[1711]: time="2025-07-07T05:56:15.718367725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:15.718716 containerd[1711]: time="2025-07-07T05:56:15.718437886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:15.720467 containerd[1711]: time="2025-07-07T05:56:15.720300129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.722128 containerd[1711]: time="2025-07-07T05:56:15.720936131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.753491 systemd-networkd[1453]: calieed4d574ddd: Link UP Jul 7 05:56:15.754725 systemd-networkd[1453]: calieed4d574ddd: Gained carrier Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.356 [INFO][4979] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.385 [INFO][4979] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0 calico-kube-controllers-69dff4fc68- calico-system 9843f4c8-0ee5-49b8-b775-11d693445b56 914 0 2025-07-07 05:55:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69dff4fc68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 calico-kube-controllers-69dff4fc68-6rlws eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieed4d574ddd [] [] }} ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.385 [INFO][4979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.457 [INFO][5012] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" HandleID="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.457 [INFO][5012] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" HandleID="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003686c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"calico-kube-controllers-69dff4fc68-6rlws", "timestamp":"2025-07-07 05:56:15.457346847 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.457 [INFO][5012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.584 [INFO][5012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.585 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.652 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.662 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.669 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.671 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.674 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.674 [INFO][5012] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.676 [INFO][5012] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444 Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.702 [INFO][5012] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.730 [INFO][5012] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.105.5/26] block=192.168.105.0/26 handle="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.733 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.5/26] handle="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.734 [INFO][5012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:15.786250 containerd[1711]: 2025-07-07 05:56:15.734 [INFO][5012] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.5/26] IPv6=[] ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" HandleID="k8s-pod-network.8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.746 [INFO][4979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0", GenerateName:"calico-kube-controllers-69dff4fc68-", Namespace:"calico-system", SelfLink:"", UID:"9843f4c8-0ee5-49b8-b775-11d693445b56", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"69dff4fc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"calico-kube-controllers-69dff4fc68-6rlws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieed4d574ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.746 [INFO][4979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.5/32] ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.746 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieed4d574ddd ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.753 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" 
Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.754 [INFO][4979] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0", GenerateName:"calico-kube-controllers-69dff4fc68-", Namespace:"calico-system", SelfLink:"", UID:"9843f4c8-0ee5-49b8-b775-11d693445b56", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dff4fc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444", Pod:"calico-kube-controllers-69dff4fc68-6rlws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieed4d574ddd", MAC:"f2:61:f5:66:32:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:15.787921 containerd[1711]: 2025-07-07 05:56:15.777 [INFO][4979] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444" Namespace="calico-system" Pod="calico-kube-controllers-69dff4fc68-6rlws" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:15.788333 systemd[1]: Started cri-containerd-88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f.scope - libcontainer container 88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f. Jul 7 05:56:15.804204 containerd[1711]: time="2025-07-07T05:56:15.802894986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-r26qp,Uid:d4471ce3-99ef-4485-9426-4f1c145f77dd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490\"" Jul 7 05:56:15.808795 containerd[1711]: time="2025-07-07T05:56:15.808746479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:56:15.830076 containerd[1711]: time="2025-07-07T05:56:15.829388323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:15.830076 containerd[1711]: time="2025-07-07T05:56:15.829576283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:15.830076 containerd[1711]: time="2025-07-07T05:56:15.829759844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.830787 containerd[1711]: time="2025-07-07T05:56:15.830551645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:15.869499 containerd[1711]: time="2025-07-07T05:56:15.869348648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2zxks,Uid:e285d422-7e9a-40bd-bc64-700d7b15f4ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f\"" Jul 7 05:56:15.890027 systemd[1]: Started cri-containerd-8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444.scope - libcontainer container 8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444. Jul 7 05:56:15.945734 containerd[1711]: time="2025-07-07T05:56:15.945690772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dff4fc68-6rlws,Uid:9843f4c8-0ee5-49b8-b775-11d693445b56,Namespace:calico-system,Attempt:1,} returns sandbox id \"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444\"" Jul 7 05:56:15.996372 containerd[1711]: time="2025-07-07T05:56:15.996006079Z" level=info msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.043 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.043 [INFO][5201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" iface="eth0" netns="/var/run/netns/cni-756941d0-5715-2ceb-668d-4d1825c5c7f6" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.044 [INFO][5201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" iface="eth0" netns="/var/run/netns/cni-756941d0-5715-2ceb-668d-4d1825c5c7f6" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.044 [INFO][5201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" iface="eth0" netns="/var/run/netns/cni-756941d0-5715-2ceb-668d-4d1825c5c7f6" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.044 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.044 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.068 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.068 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.069 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.079 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.079 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.081 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:16.090130 containerd[1711]: 2025-07-07 05:56:16.087 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:16.095080 containerd[1711]: time="2025-07-07T05:56:16.091309843Z" level=info msg="TearDown network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" successfully" Jul 7 05:56:16.095080 containerd[1711]: time="2025-07-07T05:56:16.091381163Z" level=info msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" returns successfully" Jul 7 05:56:16.095080 containerd[1711]: time="2025-07-07T05:56:16.092172565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6jpc6,Uid:b7f453a6-8802-4475-abf7-3bbe1e10231e,Namespace:kube-system,Attempt:1,}" Jul 7 05:56:16.278366 systemd-networkd[1453]: cali86ed417a576: Gained IPv6LL Jul 7 05:56:16.280244 kernel: bpftool[5251]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 05:56:16.298660 systemd-networkd[1453]: cali75a9ad07272: Link UP Jul 7 05:56:16.298886 systemd-networkd[1453]: cali75a9ad07272: Gained carrier Jul 7 05:56:16.326316 containerd[1711]: 
2025-07-07 05:56:16.164 [INFO][5220] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.182 [INFO][5220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0 coredns-7c65d6cfc9- kube-system b7f453a6-8802-4475-abf7-3bbe1e10231e 945 0 2025-07-07 05:55:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 coredns-7c65d6cfc9-6jpc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75a9ad07272 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.182 [INFO][5220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.234 [INFO][5235] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" HandleID="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.234 [INFO][5235] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" 
HandleID="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"coredns-7c65d6cfc9-6jpc6", "timestamp":"2025-07-07 05:56:16.234136589 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.234 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.234 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.234 [INFO][5235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.244 [INFO][5235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.252 [INFO][5235] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.259 [INFO][5235] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.262 [INFO][5235] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.265 [INFO][5235] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.265 [INFO][5235] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.267 [INFO][5235] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.280 [INFO][5235] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.291 [INFO][5235] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.6/26] block=192.168.105.0/26 handle="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.291 [INFO][5235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.6/26] handle="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.291 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:56:16.326316 containerd[1711]: 2025-07-07 05:56:16.291 [INFO][5235] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.6/26] IPv6=[] ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" HandleID="k8s-pod-network.92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.327350 containerd[1711]: 2025-07-07 05:56:16.295 [INFO][5220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b7f453a6-8802-4475-abf7-3bbe1e10231e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"coredns-7c65d6cfc9-6jpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75a9ad07272", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:16.327350 containerd[1711]: 2025-07-07 05:56:16.295 [INFO][5220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.6/32] ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.327350 containerd[1711]: 2025-07-07 05:56:16.295 [INFO][5220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75a9ad07272 ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.327350 containerd[1711]: 2025-07-07 05:56:16.304 [INFO][5220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.327350 containerd[1711]: 2025-07-07 05:56:16.304 [INFO][5220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" 
WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b7f453a6-8802-4475-abf7-3bbe1e10231e", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f", Pod:"coredns-7c65d6cfc9-6jpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75a9ad07272", MAC:"ae:96:73:50:21:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:16.327350 containerd[1711]: 
2025-07-07 05:56:16.321 [INFO][5220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6jpc6" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:16.347935 containerd[1711]: time="2025-07-07T05:56:16.347832832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:16.348412 containerd[1711]: time="2025-07-07T05:56:16.347894272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:16.348412 containerd[1711]: time="2025-07-07T05:56:16.348053952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:16.348412 containerd[1711]: time="2025-07-07T05:56:16.348237033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:16.373346 systemd[1]: Started cri-containerd-92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f.scope - libcontainer container 92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f. 
Jul 7 05:56:16.411057 containerd[1711]: time="2025-07-07T05:56:16.411011527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6jpc6,Uid:b7f453a6-8802-4475-abf7-3bbe1e10231e,Namespace:kube-system,Attempt:1,} returns sandbox id \"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f\"" Jul 7 05:56:16.417303 containerd[1711]: time="2025-07-07T05:56:16.417241260Z" level=info msg="CreateContainer within sandbox \"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:56:16.487028 containerd[1711]: time="2025-07-07T05:56:16.486949529Z" level=info msg="CreateContainer within sandbox \"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e97e1908f1d4e8529cdb5e547faa9bacff45ac6228259eb48d68bff2d8610442\"" Jul 7 05:56:16.488874 containerd[1711]: time="2025-07-07T05:56:16.488750213Z" level=info msg="StartContainer for \"e97e1908f1d4e8529cdb5e547faa9bacff45ac6228259eb48d68bff2d8610442\"" Jul 7 05:56:16.527332 systemd[1]: Started cri-containerd-e97e1908f1d4e8529cdb5e547faa9bacff45ac6228259eb48d68bff2d8610442.scope - libcontainer container e97e1908f1d4e8529cdb5e547faa9bacff45ac6228259eb48d68bff2d8610442. Jul 7 05:56:16.575538 containerd[1711]: time="2025-07-07T05:56:16.575407079Z" level=info msg="StartContainer for \"e97e1908f1d4e8529cdb5e547faa9bacff45ac6228259eb48d68bff2d8610442\" returns successfully" Jul 7 05:56:16.604361 systemd-networkd[1453]: vxlan.calico: Link UP Jul 7 05:56:16.604370 systemd-networkd[1453]: vxlan.calico: Gained carrier Jul 7 05:56:16.615684 systemd[1]: run-netns-cni\x2d756941d0\x2d5715\x2d2ceb\x2d668d\x2d4d1825c5c7f6.mount: Deactivated successfully. 
Jul 7 05:56:16.917346 systemd-networkd[1453]: califc52ab5d702: Gained IPv6LL Jul 7 05:56:16.998472 containerd[1711]: time="2025-07-07T05:56:16.998192110Z" level=info msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.068 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.070 [INFO][5420] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" iface="eth0" netns="/var/run/netns/cni-a23dac59-2523-f649-5e15-911b8c0c3888" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.070 [INFO][5420] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" iface="eth0" netns="/var/run/netns/cni-a23dac59-2523-f649-5e15-911b8c0c3888" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.070 [INFO][5420] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" iface="eth0" netns="/var/run/netns/cni-a23dac59-2523-f649-5e15-911b8c0c3888" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.070 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.070 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.091 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.092 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.092 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.102 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.102 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.103 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:17.107125 containerd[1711]: 2025-07-07 05:56:17.105 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:17.107635 containerd[1711]: time="2025-07-07T05:56:17.107283147Z" level=info msg="TearDown network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" successfully" Jul 7 05:56:17.107635 containerd[1711]: time="2025-07-07T05:56:17.107310147Z" level=info msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" returns successfully" Jul 7 05:56:17.108203 containerd[1711]: time="2025-07-07T05:56:17.108074269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvv7q,Uid:6f156b26-7db3-4daa-b70b-8db09e58fe84,Namespace:calico-system,Attempt:1,}" Jul 7 05:56:17.110862 systemd[1]: run-netns-cni\x2da23dac59\x2d2523\x2df649\x2d5e15\x2d911b8c0c3888.mount: Deactivated successfully. 
Jul 7 05:56:17.237496 systemd-networkd[1453]: cali1953f46ab77: Gained IPv6LL Jul 7 05:56:17.271996 systemd-networkd[1453]: cali4aea4af1bd5: Link UP Jul 7 05:56:17.272275 systemd-networkd[1453]: cali4aea4af1bd5: Gained carrier Jul 7 05:56:17.295916 kubelet[3105]: I0707 05:56:17.295211 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6jpc6" podStartSLOduration=41.295193235 podStartE2EDuration="41.295193235s" podCreationTimestamp="2025-07-07 05:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:56:17.292625189 +0000 UTC m=+48.417673121" watchObservedRunningTime="2025-07-07 05:56:17.295193235 +0000 UTC m=+48.420241207" Jul 7 05:56:17.302426 systemd-networkd[1453]: calieed4d574ddd: Gained IPv6LL Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.190 [INFO][5435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0 goldmane-58fd7646b9- calico-system 6f156b26-7db3-4daa-b70b-8db09e58fe84 955 0 2025-07-07 05:55:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 goldmane-58fd7646b9-pvv7q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4aea4af1bd5 [] [] }} ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.190 [INFO][5435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.217 [INFO][5447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" HandleID="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.217 [INFO][5447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" HandleID="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"goldmane-58fd7646b9-pvv7q", "timestamp":"2025-07-07 05:56:17.217642827 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.217 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.217 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.217 [INFO][5447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.227 [INFO][5447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.231 [INFO][5447] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.235 [INFO][5447] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.238 [INFO][5447] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.241 [INFO][5447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.241 [INFO][5447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.243 [INFO][5447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.249 [INFO][5447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.263 [INFO][5447] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.105.7/26] block=192.168.105.0/26 handle="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.263 [INFO][5447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.7/26] handle="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.264 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:17.303972 containerd[1711]: 2025-07-07 05:56:17.264 [INFO][5447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.7/26] IPv6=[] ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" HandleID="k8s-pod-network.6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.268 [INFO][5435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6f156b26-7db3-4daa-b70b-8db09e58fe84", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"goldmane-58fd7646b9-pvv7q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aea4af1bd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.268 [INFO][5435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.7/32] ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.268 [INFO][5435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aea4af1bd5 ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.270 [INFO][5435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.271 [INFO][5435] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6f156b26-7db3-4daa-b70b-8db09e58fe84", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db", Pod:"goldmane-58fd7646b9-pvv7q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aea4af1bd5", MAC:"6a:91:8b:f4:f1:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:17.305348 containerd[1711]: 2025-07-07 05:56:17.299 [INFO][5435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-pvv7q" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:17.335080 containerd[1711]: time="2025-07-07T05:56:17.334548240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:17.335080 containerd[1711]: time="2025-07-07T05:56:17.334856161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:17.335080 containerd[1711]: time="2025-07-07T05:56:17.334874401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:17.336252 containerd[1711]: time="2025-07-07T05:56:17.334975361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:17.363325 systemd[1]: Started cri-containerd-6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db.scope - libcontainer container 6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db. 
Jul 7 05:56:17.405200 containerd[1711]: time="2025-07-07T05:56:17.405145634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvv7q,Uid:6f156b26-7db3-4daa-b70b-8db09e58fe84,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db\"" Jul 7 05:56:17.621296 systemd-networkd[1453]: cali75a9ad07272: Gained IPv6LL Jul 7 05:56:17.996917 containerd[1711]: time="2025-07-07T05:56:17.996774559Z" level=info msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.046 [INFO][5517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.046 [INFO][5517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" iface="eth0" netns="/var/run/netns/cni-e26b328f-68a0-0098-8cd4-bddaa909a7ed" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.047 [INFO][5517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" iface="eth0" netns="/var/run/netns/cni-e26b328f-68a0-0098-8cd4-bddaa909a7ed" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.048 [INFO][5517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" iface="eth0" netns="/var/run/netns/cni-e26b328f-68a0-0098-8cd4-bddaa909a7ed" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.048 [INFO][5517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.048 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.070 [INFO][5524] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.070 [INFO][5524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.070 [INFO][5524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.080 [WARNING][5524] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.080 [INFO][5524] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.082 [INFO][5524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:18.089787 containerd[1711]: 2025-07-07 05:56:18.084 [INFO][5517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:18.091796 containerd[1711]: time="2025-07-07T05:56:18.089890321Z" level=info msg="TearDown network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" successfully" Jul 7 05:56:18.091796 containerd[1711]: time="2025-07-07T05:56:18.089919641Z" level=info msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" returns successfully" Jul 7 05:56:18.091796 containerd[1711]: time="2025-07-07T05:56:18.090763083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-bhmcq,Uid:654bded6-4470-4285-bbbd-8ec39f31a268,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:56:18.093511 systemd[1]: run-netns-cni\x2de26b328f\x2d68a0\x2d0098\x2d8cd4\x2dbddaa909a7ed.mount: Deactivated successfully. 
Jul 7 05:56:18.252188 systemd-networkd[1453]: calibc4c7442c5d: Link UP Jul 7 05:56:18.253410 systemd-networkd[1453]: calibc4c7442c5d: Gained carrier Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.167 [INFO][5530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0 calico-apiserver-8444f76b56- calico-apiserver 654bded6-4470-4285-bbbd-8ec39f31a268 970 0 2025-07-07 05:55:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8444f76b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-2bf61d9e54 calico-apiserver-8444f76b56-bhmcq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc4c7442c5d [] [] }} ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.167 [INFO][5530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.194 [INFO][5542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" HandleID="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277313 
containerd[1711]: 2025-07-07 05:56:18.194 [INFO][5542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" HandleID="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-2bf61d9e54", "pod":"calico-apiserver-8444f76b56-bhmcq", "timestamp":"2025-07-07 05:56:18.194074748 +0000 UTC"}, Hostname:"ci-4081.3.4-a-2bf61d9e54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.194 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.194 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.194 [INFO][5542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-2bf61d9e54' Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.204 [INFO][5542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.209 [INFO][5542] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.214 [INFO][5542] ipam/ipam.go 511: Trying affinity for 192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.216 [INFO][5542] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.218 [INFO][5542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.0/26 host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.218 [INFO][5542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.0/26 handle="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.220 [INFO][5542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.231 [INFO][5542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.0/26 handle="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.242 [INFO][5542] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.105.8/26] block=192.168.105.0/26 handle="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.242 [INFO][5542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.8/26] handle="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" host="ci-4081.3.4-a-2bf61d9e54" Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.242 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:18.277313 containerd[1711]: 2025-07-07 05:56:18.242 [INFO][5542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.8/26] IPv6=[] ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" HandleID="k8s-pod-network.567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.245 [INFO][5530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"654bded6-4470-4285-bbbd-8ec39f31a268", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"", Pod:"calico-apiserver-8444f76b56-bhmcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc4c7442c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.245 [INFO][5530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.8/32] ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.245 [INFO][5530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc4c7442c5d ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.254 [INFO][5530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" 
WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.256 [INFO][5530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"654bded6-4470-4285-bbbd-8ec39f31a268", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d", Pod:"calico-apiserver-8444f76b56-bhmcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc4c7442c5d", MAC:"5a:89:e5:d4:1b:4f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:18.277948 containerd[1711]: 2025-07-07 05:56:18.272 [INFO][5530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d" Namespace="calico-apiserver" Pod="calico-apiserver-8444f76b56-bhmcq" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:18.310196 containerd[1711]: time="2025-07-07T05:56:18.306331791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:18.310196 containerd[1711]: time="2025-07-07T05:56:18.309229118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:18.310196 containerd[1711]: time="2025-07-07T05:56:18.309251958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:18.310196 containerd[1711]: time="2025-07-07T05:56:18.309367118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:18.348538 systemd[1]: Started cri-containerd-567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d.scope - libcontainer container 567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d. 
Jul 7 05:56:18.435992 containerd[1711]: time="2025-07-07T05:56:18.435930753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444f76b56-bhmcq,Uid:654bded6-4470-4285-bbbd-8ec39f31a268,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d\"" Jul 7 05:56:18.645254 systemd-networkd[1453]: vxlan.calico: Gained IPv6LL Jul 7 05:56:18.710385 systemd-networkd[1453]: cali4aea4af1bd5: Gained IPv6LL Jul 7 05:56:19.926418 systemd-networkd[1453]: calibc4c7442c5d: Gained IPv6LL Jul 7 05:56:20.824958 containerd[1711]: time="2025-07-07T05:56:20.824844982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:20.827309 containerd[1711]: time="2025-07-07T05:56:20.827134627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 05:56:20.837766 containerd[1711]: time="2025-07-07T05:56:20.830596435Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:20.838391 containerd[1711]: time="2025-07-07T05:56:20.835148364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.026158045s" Jul 7 05:56:20.838391 containerd[1711]: time="2025-07-07T05:56:20.838028211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:56:20.838722 containerd[1711]: 
time="2025-07-07T05:56:20.838616852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:20.841993 containerd[1711]: time="2025-07-07T05:56:20.841884499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 05:56:20.843008 containerd[1711]: time="2025-07-07T05:56:20.842839301Z" level=info msg="CreateContainer within sandbox \"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:56:20.894508 containerd[1711]: time="2025-07-07T05:56:20.894458533Z" level=info msg="CreateContainer within sandbox \"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcfb3df41721386d5250ff44d389d45366aad43713f4a6336a2301aef45c4915\"" Jul 7 05:56:20.895291 containerd[1711]: time="2025-07-07T05:56:20.895260135Z" level=info msg="StartContainer for \"dcfb3df41721386d5250ff44d389d45366aad43713f4a6336a2301aef45c4915\"" Jul 7 05:56:20.934341 systemd[1]: Started cri-containerd-dcfb3df41721386d5250ff44d389d45366aad43713f4a6336a2301aef45c4915.scope - libcontainer container dcfb3df41721386d5250ff44d389d45366aad43713f4a6336a2301aef45c4915. 
Jul 7 05:56:20.985149 containerd[1711]: time="2025-07-07T05:56:20.985054210Z" level=info msg="StartContainer for \"dcfb3df41721386d5250ff44d389d45366aad43713f4a6336a2301aef45c4915\" returns successfully" Jul 7 05:56:22.160150 containerd[1711]: time="2025-07-07T05:56:22.159894242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:22.163763 containerd[1711]: time="2025-07-07T05:56:22.162232447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 05:56:22.166059 containerd[1711]: time="2025-07-07T05:56:22.165670375Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:22.172390 containerd[1711]: time="2025-07-07T05:56:22.172347509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:22.173568 containerd[1711]: time="2025-07-07T05:56:22.173040791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.331113812s" Jul 7 05:56:22.173568 containerd[1711]: time="2025-07-07T05:56:22.173565032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 05:56:22.176052 containerd[1711]: time="2025-07-07T05:56:22.175845917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 05:56:22.177938 
containerd[1711]: time="2025-07-07T05:56:22.177887881Z" level=info msg="CreateContainer within sandbox \"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 05:56:22.221618 containerd[1711]: time="2025-07-07T05:56:22.221566376Z" level=info msg="CreateContainer within sandbox \"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"437cd16e2bc72fcefc1d394c3f82d4aa9592a5383baf132b67eb67df71c1663d\"" Jul 7 05:56:22.225149 containerd[1711]: time="2025-07-07T05:56:22.222388018Z" level=info msg="StartContainer for \"437cd16e2bc72fcefc1d394c3f82d4aa9592a5383baf132b67eb67df71c1663d\"" Jul 7 05:56:22.272308 systemd[1]: Started cri-containerd-437cd16e2bc72fcefc1d394c3f82d4aa9592a5383baf132b67eb67df71c1663d.scope - libcontainer container 437cd16e2bc72fcefc1d394c3f82d4aa9592a5383baf132b67eb67df71c1663d. Jul 7 05:56:22.298633 kubelet[3105]: I0707 05:56:22.298566 3105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:56:22.339586 containerd[1711]: time="2025-07-07T05:56:22.339539872Z" level=info msg="StartContainer for \"437cd16e2bc72fcefc1d394c3f82d4aa9592a5383baf132b67eb67df71c1663d\" returns successfully" Jul 7 05:56:23.453588 kubelet[3105]: I0707 05:56:23.453318 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8444f76b56-r26qp" podStartSLOduration=31.421758234 podStartE2EDuration="36.453296891s" podCreationTimestamp="2025-07-07 05:55:47 +0000 UTC" firstStartedPulling="2025-07-07 05:56:15.808135317 +0000 UTC m=+46.933183289" lastFinishedPulling="2025-07-07 05:56:20.839673974 +0000 UTC m=+51.964721946" observedRunningTime="2025-07-07 05:56:21.304577344 +0000 UTC m=+52.429625316" watchObservedRunningTime="2025-07-07 05:56:23.453296891 +0000 UTC m=+54.578344823" Jul 7 05:56:24.390036 containerd[1711]: 
time="2025-07-07T05:56:24.389268565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:24.392754 containerd[1711]: time="2025-07-07T05:56:24.392713572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 05:56:24.398340 containerd[1711]: time="2025-07-07T05:56:24.398274424Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:24.404833 containerd[1711]: time="2025-07-07T05:56:24.404761718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:24.405757 containerd[1711]: time="2025-07-07T05:56:24.405574240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.229690643s" Jul 7 05:56:24.405757 containerd[1711]: time="2025-07-07T05:56:24.405607560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 05:56:24.407784 containerd[1711]: time="2025-07-07T05:56:24.407459084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 05:56:24.421579 containerd[1711]: time="2025-07-07T05:56:24.421340634Z" level=info msg="CreateContainer within sandbox 
\"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 05:56:24.474344 containerd[1711]: time="2025-07-07T05:56:24.474295869Z" level=info msg="CreateContainer within sandbox \"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a\"" Jul 7 05:56:24.476356 containerd[1711]: time="2025-07-07T05:56:24.475043431Z" level=info msg="StartContainer for \"a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a\"" Jul 7 05:56:24.514330 systemd[1]: Started cri-containerd-a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a.scope - libcontainer container a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a. Jul 7 05:56:24.558462 containerd[1711]: time="2025-07-07T05:56:24.558410892Z" level=info msg="StartContainer for \"a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a\" returns successfully" Jul 7 05:56:25.348163 kubelet[3105]: I0707 05:56:25.347571 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69dff4fc68-6rlws" podStartSLOduration=25.889406183 podStartE2EDuration="34.347551568s" podCreationTimestamp="2025-07-07 05:55:51 +0000 UTC" firstStartedPulling="2025-07-07 05:56:15.948583018 +0000 UTC m=+47.073630990" lastFinishedPulling="2025-07-07 05:56:24.406728403 +0000 UTC m=+55.531776375" observedRunningTime="2025-07-07 05:56:25.347057207 +0000 UTC m=+56.472105179" watchObservedRunningTime="2025-07-07 05:56:25.347551568 +0000 UTC m=+56.472599540" Jul 7 05:56:26.302340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658545040.mount: Deactivated successfully. 
Jul 7 05:56:27.177874 containerd[1711]: time="2025-07-07T05:56:27.177811992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:27.179882 containerd[1711]: time="2025-07-07T05:56:27.179834196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 05:56:27.183293 containerd[1711]: time="2025-07-07T05:56:27.183226123Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:27.191020 containerd[1711]: time="2025-07-07T05:56:27.190590779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:27.192222 containerd[1711]: time="2025-07-07T05:56:27.192011343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.784511539s" Jul 7 05:56:27.192222 containerd[1711]: time="2025-07-07T05:56:27.192057703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 05:56:27.194157 containerd[1711]: time="2025-07-07T05:56:27.193466226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:56:27.195246 containerd[1711]: time="2025-07-07T05:56:27.195213350Z" level=info msg="CreateContainer within sandbox \"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 05:56:27.251406 containerd[1711]: time="2025-07-07T05:56:27.251358032Z" level=info msg="CreateContainer within sandbox \"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"69e50a23a94a622db127f5c23c291e31c38c02a9b687cd74d154d0c358bc1e31\"" Jul 7 05:56:27.255057 containerd[1711]: time="2025-07-07T05:56:27.253535236Z" level=info msg="StartContainer for \"69e50a23a94a622db127f5c23c291e31c38c02a9b687cd74d154d0c358bc1e31\"" Jul 7 05:56:27.336537 systemd[1]: Started cri-containerd-69e50a23a94a622db127f5c23c291e31c38c02a9b687cd74d154d0c358bc1e31.scope - libcontainer container 69e50a23a94a622db127f5c23c291e31c38c02a9b687cd74d154d0c358bc1e31. Jul 7 05:56:27.519116 containerd[1711]: time="2025-07-07T05:56:27.516394969Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:27.521165 containerd[1711]: time="2025-07-07T05:56:27.521121379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 05:56:27.529469 containerd[1711]: time="2025-07-07T05:56:27.529423437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 335.914971ms" Jul 7 05:56:27.532569 containerd[1711]: time="2025-07-07T05:56:27.532529644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:56:27.534930 containerd[1711]: time="2025-07-07T05:56:27.534898809Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 05:56:27.540505 containerd[1711]: time="2025-07-07T05:56:27.540290061Z" level=info msg="CreateContainer within sandbox \"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:56:27.544117 containerd[1711]: time="2025-07-07T05:56:27.541683864Z" level=info msg="StartContainer for \"69e50a23a94a622db127f5c23c291e31c38c02a9b687cd74d154d0c358bc1e31\" returns successfully" Jul 7 05:56:27.578783 containerd[1711]: time="2025-07-07T05:56:27.578730384Z" level=info msg="CreateContainer within sandbox \"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e47986729c26252cd89eea0e6ca44dd94dbc392452843800373f58e8e73756a\"" Jul 7 05:56:27.579857 containerd[1711]: time="2025-07-07T05:56:27.579709866Z" level=info msg="StartContainer for \"1e47986729c26252cd89eea0e6ca44dd94dbc392452843800373f58e8e73756a\"" Jul 7 05:56:27.623332 systemd[1]: Started cri-containerd-1e47986729c26252cd89eea0e6ca44dd94dbc392452843800373f58e8e73756a.scope - libcontainer container 1e47986729c26252cd89eea0e6ca44dd94dbc392452843800373f58e8e73756a. 
Jul 7 05:56:27.718328 containerd[1711]: time="2025-07-07T05:56:27.718289528Z" level=info msg="StartContainer for \"1e47986729c26252cd89eea0e6ca44dd94dbc392452843800373f58e8e73756a\" returns successfully" Jul 7 05:56:28.388290 kubelet[3105]: I0707 05:56:28.388182 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-pvv7q" podStartSLOduration=27.601646518 podStartE2EDuration="37.388160146s" podCreationTimestamp="2025-07-07 05:55:51 +0000 UTC" firstStartedPulling="2025-07-07 05:56:17.406755997 +0000 UTC m=+48.531803969" lastFinishedPulling="2025-07-07 05:56:27.193269465 +0000 UTC m=+58.318317597" observedRunningTime="2025-07-07 05:56:28.386238542 +0000 UTC m=+59.511286634" watchObservedRunningTime="2025-07-07 05:56:28.388160146 +0000 UTC m=+59.513208118" Jul 7 05:56:28.548054 kubelet[3105]: I0707 05:56:28.547630 3105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:56:28.608135 kubelet[3105]: I0707 05:56:28.606120 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8444f76b56-bhmcq" podStartSLOduration=32.510142171 podStartE2EDuration="41.6060761s" podCreationTimestamp="2025-07-07 05:55:47 +0000 UTC" firstStartedPulling="2025-07-07 05:56:18.438587479 +0000 UTC m=+49.563635451" lastFinishedPulling="2025-07-07 05:56:27.534521408 +0000 UTC m=+58.659569380" observedRunningTime="2025-07-07 05:56:28.423221142 +0000 UTC m=+59.548269114" watchObservedRunningTime="2025-07-07 05:56:28.6060761 +0000 UTC m=+59.731124072" Jul 7 05:56:29.018339 containerd[1711]: time="2025-07-07T05:56:29.018268677Z" level=info msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" Jul 7 05:56:29.069526 containerd[1711]: time="2025-07-07T05:56:29.068975627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 7 05:56:29.081401 containerd[1711]: time="2025-07-07T05:56:29.081351894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 05:56:29.095020 containerd[1711]: time="2025-07-07T05:56:29.094950724Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:29.121829 containerd[1711]: time="2025-07-07T05:56:29.121022101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:29.128221 containerd[1711]: time="2025-07-07T05:56:29.128004476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.592926987s" Jul 7 05:56:29.128221 containerd[1711]: time="2025-07-07T05:56:29.128050236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 05:56:29.133352 containerd[1711]: time="2025-07-07T05:56:29.133170007Z" level=info msg="CreateContainer within sandbox \"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 05:56:29.184526 containerd[1711]: time="2025-07-07T05:56:29.184470199Z" level=info msg="CreateContainer within sandbox \"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b87f3454184cf434219e2eb025af42c38fa44f0155469ca98ceba4ce7694531\"" Jul 7 05:56:29.186576 containerd[1711]: time="2025-07-07T05:56:29.186529443Z" level=info msg="StartContainer for \"3b87f3454184cf434219e2eb025af42c38fa44f0155469ca98ceba4ce7694531\"" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.101 [WARNING][5923] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.101 [INFO][5923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.101 [INFO][5923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" iface="eth0" netns="" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.101 [INFO][5923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.101 [INFO][5923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.163 [INFO][5931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.172 [INFO][5931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.172 [INFO][5931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.192 [WARNING][5931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.192 [INFO][5931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.197 [INFO][5931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.212160 containerd[1711]: 2025-07-07 05:56:29.202 [INFO][5923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.212160 containerd[1711]: time="2025-07-07T05:56:29.211294857Z" level=info msg="TearDown network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" successfully" Jul 7 05:56:29.212160 containerd[1711]: time="2025-07-07T05:56:29.211325257Z" level=info msg="StopPodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" returns successfully" Jul 7 05:56:29.213779 containerd[1711]: time="2025-07-07T05:56:29.213268702Z" level=info msg="RemovePodSandbox for \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" Jul 7 05:56:29.226263 containerd[1711]: time="2025-07-07T05:56:29.218056472Z" level=info msg="Forcibly stopping sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\"" Jul 7 05:56:29.252343 systemd[1]: Started cri-containerd-3b87f3454184cf434219e2eb025af42c38fa44f0155469ca98ceba4ce7694531.scope - libcontainer container 3b87f3454184cf434219e2eb025af42c38fa44f0155469ca98ceba4ce7694531. 
Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.292 [WARNING][5963] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" WorkloadEndpoint="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.292 [INFO][5963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.292 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" iface="eth0" netns="" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.292 [INFO][5963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.292 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.321 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.322 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.322 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.332 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.332 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" HandleID="k8s-pod-network.b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-whisker--7fbcfc8df4--stb7m-eth0" Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.335 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.339363 containerd[1711]: 2025-07-07 05:56:29.336 [INFO][5963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3" Jul 7 05:56:29.339363 containerd[1711]: time="2025-07-07T05:56:29.339326136Z" level=info msg="TearDown network for sandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" successfully" Jul 7 05:56:29.348071 containerd[1711]: time="2025-07-07T05:56:29.347997395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:29.348288 containerd[1711]: time="2025-07-07T05:56:29.348189755Z" level=info msg="RemovePodSandbox \"b5f8d87ce3e233caf3225d98803664bb94f2accf1bb3d84978410cffbe4e12f3\" returns successfully" Jul 7 05:56:29.349142 containerd[1711]: time="2025-07-07T05:56:29.349080357Z" level=info msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" Jul 7 05:56:29.533523 containerd[1711]: time="2025-07-07T05:56:29.533466838Z" level=info msg="StartContainer for \"3b87f3454184cf434219e2eb025af42c38fa44f0155469ca98ceba4ce7694531\" returns successfully" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.455 [WARNING][5993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4471ce3-99ef-4485-9426-4f1c145f77dd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", 
ContainerID:"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490", Pod:"calico-apiserver-8444f76b56-r26qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc52ab5d702", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.455 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.455 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" iface="eth0" netns="" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.455 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.455 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.502 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.503 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.504 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.569 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.569 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.573 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.585044 containerd[1711]: 2025-07-07 05:56:29.580 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.587500 containerd[1711]: time="2025-07-07T05:56:29.587449716Z" level=info msg="TearDown network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" successfully" Jul 7 05:56:29.587619 containerd[1711]: time="2025-07-07T05:56:29.587603556Z" level=info msg="StopPodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" returns successfully" Jul 7 05:56:29.588763 containerd[1711]: time="2025-07-07T05:56:29.588727039Z" level=info msg="RemovePodSandbox for \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" Jul 7 05:56:29.588846 containerd[1711]: time="2025-07-07T05:56:29.588768319Z" level=info msg="Forcibly stopping sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\"" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.671 [WARNING][6046] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4471ce3-99ef-4485-9426-4f1c145f77dd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"77d023fd05184baece74c64077218f4d694be94ff4c0fc323ad676415e668490", Pod:"calico-apiserver-8444f76b56-r26qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc52ab5d702", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.671 [INFO][6046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.671 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" iface="eth0" netns="" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.671 [INFO][6046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.671 [INFO][6046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.705 [INFO][6057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.707 [INFO][6057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.707 [INFO][6057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.717 [WARNING][6057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.717 [INFO][6057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" HandleID="k8s-pod-network.02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--r26qp-eth0" Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.720 [INFO][6057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.725392 containerd[1711]: 2025-07-07 05:56:29.722 [INFO][6046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c" Jul 7 05:56:29.725809 containerd[1711]: time="2025-07-07T05:56:29.725441616Z" level=info msg="TearDown network for sandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" successfully" Jul 7 05:56:29.734696 containerd[1711]: time="2025-07-07T05:56:29.734638436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:29.734850 containerd[1711]: time="2025-07-07T05:56:29.734729876Z" level=info msg="RemovePodSandbox \"02b482ebec38db285c72aabaea90603c56788adad9f11e42f87d427b02cc325c\" returns successfully" Jul 7 05:56:29.735676 containerd[1711]: time="2025-07-07T05:56:29.735641038Z" level=info msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.802 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93d92675-dc75-493f-afb1-b5a1b922550a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018", Pod:"coredns-7c65d6cfc9-7c8tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86ed417a576", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.802 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.802 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" iface="eth0" netns="" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.802 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.802 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.840 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.841 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.841 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.851 [WARNING][6078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.851 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.857 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.862842 containerd[1711]: 2025-07-07 05:56:29.859 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.862842 containerd[1711]: time="2025-07-07T05:56:29.862711115Z" level=info msg="TearDown network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" successfully" Jul 7 05:56:29.862842 containerd[1711]: time="2025-07-07T05:56:29.862737675Z" level=info msg="StopPodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" returns successfully" Jul 7 05:56:29.865615 containerd[1711]: time="2025-07-07T05:56:29.864971080Z" level=info msg="RemovePodSandbox for \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" Jul 7 05:56:29.865615 containerd[1711]: time="2025-07-07T05:56:29.865009120Z" level=info msg="Forcibly stopping sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\"" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.926 [WARNING][6092] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"93d92675-dc75-493f-afb1-b5a1b922550a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"e7dad9a2ec7f5c65514303e6eca847f1e94c928a5013dfa32e4f77a2d6b5c018", Pod:"coredns-7c65d6cfc9-7c8tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86ed417a576", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 
05:56:29.926 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.926 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" iface="eth0" netns="" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.926 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.926 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.963 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.965 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.965 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.977 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.977 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" HandleID="k8s-pod-network.daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--7c8tc-eth0" Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.980 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:29.988200 containerd[1711]: 2025-07-07 05:56:29.983 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8" Jul 7 05:56:29.988200 containerd[1711]: time="2025-07-07T05:56:29.986069143Z" level=info msg="TearDown network for sandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" successfully" Jul 7 05:56:30.000710 containerd[1711]: time="2025-07-07T05:56:30.000665015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:30.001018 containerd[1711]: time="2025-07-07T05:56:30.000912136Z" level=info msg="RemovePodSandbox \"daddff2581982bd0faaa52ad0bfde0502d56b7b9c0e686600fe8f1fe2c26e5d8\" returns successfully" Jul 7 05:56:30.002212 containerd[1711]: time="2025-07-07T05:56:30.001473217Z" level=info msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" Jul 7 05:56:30.134283 kubelet[3105]: I0707 05:56:30.134237 3105 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.062 [WARNING][6113] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e285d422-7e9a-40bd-bc64-700d7b15f4ab", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", 
ContainerID:"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f", Pod:"csi-node-driver-2zxks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1953f46ab77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.062 [INFO][6113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.062 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" iface="eth0" netns="" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.062 [INFO][6113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.062 [INFO][6113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.093 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.094 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.094 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.128 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.128 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.131 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.136055 containerd[1711]: 2025-07-07 05:56:30.132 [INFO][6113] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.136055 containerd[1711]: time="2025-07-07T05:56:30.135243108Z" level=info msg="TearDown network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" successfully" Jul 7 05:56:30.136055 containerd[1711]: time="2025-07-07T05:56:30.135269628Z" level=info msg="StopPodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" returns successfully" Jul 7 05:56:30.139136 containerd[1711]: time="2025-07-07T05:56:30.138122514Z" level=info msg="RemovePodSandbox for \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" Jul 7 05:56:30.139136 containerd[1711]: time="2025-07-07T05:56:30.138164354Z" level=info msg="Forcibly stopping sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\"" Jul 7 05:56:30.142613 kubelet[3105]: I0707 05:56:30.142519 3105 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.203 [WARNING][6135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e285d422-7e9a-40bd-bc64-700d7b15f4ab", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"88457f4e5b8e26883d470b7019774a2a4ecc789bf05773d7030883356ca0be5f", Pod:"csi-node-driver-2zxks", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1953f46ab77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.203 [INFO][6135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.203 [INFO][6135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" iface="eth0" netns="" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.203 [INFO][6135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.203 [INFO][6135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.235 [INFO][6143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.236 [INFO][6143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.236 [INFO][6143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.261 [WARNING][6143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.261 [INFO][6143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" HandleID="k8s-pod-network.86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-csi--node--driver--2zxks-eth0" Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.292 [INFO][6143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.297337 containerd[1711]: 2025-07-07 05:56:30.293 [INFO][6135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d" Jul 7 05:56:30.297337 containerd[1711]: time="2025-07-07T05:56:30.296337459Z" level=info msg="TearDown network for sandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" successfully" Jul 7 05:56:30.325289 containerd[1711]: time="2025-07-07T05:56:30.325202601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:30.325612 containerd[1711]: time="2025-07-07T05:56:30.325505362Z" level=info msg="RemovePodSandbox \"86e7605bac2aef3c6b12524e1341014db9ef552a5a0ed225f2720ade5b266b2d\" returns successfully" Jul 7 05:56:30.326592 containerd[1711]: time="2025-07-07T05:56:30.326295284Z" level=info msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" Jul 7 05:56:30.380705 kubelet[3105]: I0707 05:56:30.379813 3105 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.458 [WARNING][6157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b7f453a6-8802-4475-abf7-3bbe1e10231e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f", Pod:"coredns-7c65d6cfc9-6jpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75a9ad07272", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.459 [INFO][6157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.459 [INFO][6157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" iface="eth0" netns="" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.459 [INFO][6157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.459 [INFO][6157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.511 [INFO][6181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.511 [INFO][6181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.511 [INFO][6181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.522 [WARNING][6181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.523 [INFO][6181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.525 [INFO][6181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.534620 containerd[1711]: 2025-07-07 05:56:30.531 [INFO][6157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.535662 containerd[1711]: time="2025-07-07T05:56:30.535621979Z" level=info msg="TearDown network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" successfully" Jul 7 05:56:30.535947 containerd[1711]: time="2025-07-07T05:56:30.535879860Z" level=info msg="StopPodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" returns successfully" Jul 7 05:56:30.537212 containerd[1711]: time="2025-07-07T05:56:30.536725182Z" level=info msg="RemovePodSandbox for \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" Jul 7 05:56:30.537212 containerd[1711]: time="2025-07-07T05:56:30.536756502Z" level=info msg="Forcibly stopping sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\"" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.603 [WARNING][6198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b7f453a6-8802-4475-abf7-3bbe1e10231e", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"92e1aa02180054a5c1ac7234b6c7885202348e1105cf5ffa384df0eef0a0fb7f", Pod:"coredns-7c65d6cfc9-6jpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75a9ad07272", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 
05:56:30.603 [INFO][6198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.603 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" iface="eth0" netns="" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.603 [INFO][6198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.603 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.635 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.636 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.636 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.648 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.649 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" HandleID="k8s-pod-network.d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-coredns--7c65d6cfc9--6jpc6-eth0" Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.650 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.655753 containerd[1711]: 2025-07-07 05:56:30.653 [INFO][6198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877" Jul 7 05:56:30.657525 containerd[1711]: time="2025-07-07T05:56:30.657254564Z" level=info msg="TearDown network for sandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" successfully" Jul 7 05:56:30.690273 containerd[1711]: time="2025-07-07T05:56:30.689848315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:30.690273 containerd[1711]: time="2025-07-07T05:56:30.689938515Z" level=info msg="RemovePodSandbox \"d4de88af35bdae4e717aee10e2f8a38be8ecf1676b7089f1f962b3c10b454877\" returns successfully" Jul 7 05:56:30.691061 containerd[1711]: time="2025-07-07T05:56:30.690580597Z" level=info msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.762 [WARNING][6220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0", GenerateName:"calico-kube-controllers-69dff4fc68-", Namespace:"calico-system", SelfLink:"", UID:"9843f4c8-0ee5-49b8-b775-11d693445b56", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dff4fc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444", Pod:"calico-kube-controllers-69dff4fc68-6rlws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieed4d574ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.764 [INFO][6220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.764 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" iface="eth0" netns="" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.764 [INFO][6220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.764 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.793 [INFO][6227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.793 [INFO][6227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.793 [INFO][6227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.815 [WARNING][6227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.815 [INFO][6227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.817 [INFO][6227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.823077 containerd[1711]: 2025-07-07 05:56:30.821 [INFO][6220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.824013 containerd[1711]: time="2025-07-07T05:56:30.823639606Z" level=info msg="TearDown network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" successfully" Jul 7 05:56:30.824013 containerd[1711]: time="2025-07-07T05:56:30.823671846Z" level=info msg="StopPodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" returns successfully" Jul 7 05:56:30.824590 containerd[1711]: time="2025-07-07T05:56:30.824569648Z" level=info msg="RemovePodSandbox for \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" Jul 7 05:56:30.824754 containerd[1711]: time="2025-07-07T05:56:30.824626888Z" level=info msg="Forcibly stopping sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\"" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.867 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0", GenerateName:"calico-kube-controllers-69dff4fc68-", Namespace:"calico-system", SelfLink:"", UID:"9843f4c8-0ee5-49b8-b775-11d693445b56", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dff4fc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"8dd896dd80db9f6c48331c57e1ef1ee4d05fa858e880081446408cb2e63aa444", Pod:"calico-kube-controllers-69dff4fc68-6rlws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieed4d574ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.867 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.868 [INFO][6241] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" iface="eth0" netns="" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.868 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.868 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.894 [INFO][6248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.895 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.895 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.909 [WARNING][6248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.909 [INFO][6248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" HandleID="k8s-pod-network.6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--kube--controllers--69dff4fc68--6rlws-eth0" Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.911 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:30.915785 containerd[1711]: 2025-07-07 05:56:30.913 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5" Jul 7 05:56:30.915785 containerd[1711]: time="2025-07-07T05:56:30.915068485Z" level=info msg="TearDown network for sandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" successfully" Jul 7 05:56:30.943734 containerd[1711]: time="2025-07-07T05:56:30.931671401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:30.943894 containerd[1711]: time="2025-07-07T05:56:30.943784028Z" level=info msg="RemovePodSandbox \"6a21a23da3e2054192f07f7b71d654d409203d8762303a2696b22cc63a8cf4e5\" returns successfully" Jul 7 05:56:30.944649 containerd[1711]: time="2025-07-07T05:56:30.944380189Z" level=info msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:30.990 [WARNING][6262] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6f156b26-7db3-4daa-b70b-8db09e58fe84", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db", Pod:"goldmane-58fd7646b9-pvv7q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali4aea4af1bd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:30.990 [INFO][6262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:30.990 [INFO][6262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" iface="eth0" netns="" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:30.990 [INFO][6262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:30.990 [INFO][6262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.020 [INFO][6269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.021 [INFO][6269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.021 [INFO][6269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.032 [WARNING][6269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.032 [INFO][6269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.034 [INFO][6269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:31.040659 containerd[1711]: 2025-07-07 05:56:31.037 [INFO][6262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.041597 containerd[1711]: time="2025-07-07T05:56:31.040703279Z" level=info msg="TearDown network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" successfully" Jul 7 05:56:31.041597 containerd[1711]: time="2025-07-07T05:56:31.040729239Z" level=info msg="StopPodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" returns successfully" Jul 7 05:56:31.042188 containerd[1711]: time="2025-07-07T05:56:31.041897241Z" level=info msg="RemovePodSandbox for \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" Jul 7 05:56:31.042188 containerd[1711]: time="2025-07-07T05:56:31.041930601Z" level=info msg="Forcibly stopping sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\"" Jul 7 05:56:31.139648 kubelet[3105]: I0707 05:56:31.139526 3105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2zxks" podStartSLOduration=26.884815635 
podStartE2EDuration="40.139505854s" podCreationTimestamp="2025-07-07 05:55:51 +0000 UTC" firstStartedPulling="2025-07-07 05:56:15.876254263 +0000 UTC m=+47.001302275" lastFinishedPulling="2025-07-07 05:56:29.130944522 +0000 UTC m=+60.255992494" observedRunningTime="2025-07-07 05:56:30.456322727 +0000 UTC m=+61.581370699" watchObservedRunningTime="2025-07-07 05:56:31.139505854 +0000 UTC m=+62.264553826" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.094 [WARNING][6283] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"6f156b26-7db3-4daa-b70b-8db09e58fe84", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"6bf6317b04323bd770da69ab09e4b2c510a13c8acf81e42138748b99c642b9db", Pod:"goldmane-58fd7646b9-pvv7q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali4aea4af1bd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.095 [INFO][6283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.095 [INFO][6283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" iface="eth0" netns="" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.095 [INFO][6283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.095 [INFO][6283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.131 [INFO][6290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.131 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.131 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.142 [WARNING][6290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.143 [INFO][6290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" HandleID="k8s-pod-network.0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-goldmane--58fd7646b9--pvv7q-eth0" Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.147 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:31.154444 containerd[1711]: 2025-07-07 05:56:31.149 [INFO][6283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973" Jul 7 05:56:31.154444 containerd[1711]: time="2025-07-07T05:56:31.153818085Z" level=info msg="TearDown network for sandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" successfully" Jul 7 05:56:31.169390 containerd[1711]: time="2025-07-07T05:56:31.168380757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:56:31.169390 containerd[1711]: time="2025-07-07T05:56:31.168580437Z" level=info msg="RemovePodSandbox \"0a1c2ea41300409375a6392e9aa4804eb3e1c203267c453ddab8880d2127e973\" returns successfully" Jul 7 05:56:31.172190 containerd[1711]: time="2025-07-07T05:56:31.170203600Z" level=info msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.222 [WARNING][6305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"654bded6-4470-4285-bbbd-8ec39f31a268", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d", Pod:"calico-apiserver-8444f76b56-bhmcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc4c7442c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.222 [INFO][6305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.222 [INFO][6305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" iface="eth0" netns="" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.222 [INFO][6305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.222 [INFO][6305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.254 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.254 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.254 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.265 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.265 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.268 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:31.273134 containerd[1711]: 2025-07-07 05:56:31.270 [INFO][6305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.273134 containerd[1711]: time="2025-07-07T05:56:31.272343423Z" level=info msg="TearDown network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" successfully" Jul 7 05:56:31.273134 containerd[1711]: time="2025-07-07T05:56:31.272368663Z" level=info msg="StopPodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" returns successfully" Jul 7 05:56:31.274114 containerd[1711]: time="2025-07-07T05:56:31.274037186Z" level=info msg="RemovePodSandbox for \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" Jul 7 05:56:31.274215 containerd[1711]: time="2025-07-07T05:56:31.274127147Z" level=info msg="Forcibly stopping sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\"" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.321 [WARNING][6328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0", GenerateName:"calico-apiserver-8444f76b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"654bded6-4470-4285-bbbd-8ec39f31a268", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444f76b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-2bf61d9e54", ContainerID:"567aa1d822636935c106d5e5d7e8c778cb45b424a2a23fcf3502f748ca9ba83d", Pod:"calico-apiserver-8444f76b56-bhmcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc4c7442c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.322 [INFO][6328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.322 [INFO][6328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" iface="eth0" netns="" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.322 [INFO][6328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.322 [INFO][6328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.354 [INFO][6335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.354 [INFO][6335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.354 [INFO][6335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.366 [WARNING][6335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.366 [INFO][6335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" HandleID="k8s-pod-network.d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Workload="ci--4081.3.4--a--2bf61d9e54-k8s-calico--apiserver--8444f76b56--bhmcq-eth0" Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.368 [INFO][6335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:56:31.374623 containerd[1711]: 2025-07-07 05:56:31.372 [INFO][6328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5" Jul 7 05:56:31.376406 containerd[1711]: time="2025-07-07T05:56:31.374672085Z" level=info msg="TearDown network for sandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" successfully" Jul 7 05:56:31.386468 containerd[1711]: time="2025-07-07T05:56:31.386125190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:56:31.386468 containerd[1711]: time="2025-07-07T05:56:31.386213511Z" level=info msg="RemovePodSandbox \"d1925ebca0f0fef5c0b9931a59365b172c9d421d7042cc7813837e6c39a2d3c5\" returns successfully" Jul 7 05:56:31.612946 systemd[1]: run-containerd-runc-k8s.io-a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a-runc.bI2GAV.mount: Deactivated successfully. 
Jul 7 05:56:53.360375 systemd[1]: run-containerd-runc-k8s.io-de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634-runc.kfme9m.mount: Deactivated successfully. Jul 7 05:57:01.597424 systemd[1]: run-containerd-runc-k8s.io-a5ca82ff72fdd4f754ee559439d5b1958a891e6b86de9ad4aedb3377d864777a-runc.knGL07.mount: Deactivated successfully. Jul 7 05:57:17.050432 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:44804.service - OpenSSH per-connection server daemon (10.200.16.10:44804). Jul 7 05:57:17.531125 sshd[6491]: Accepted publickey for core from 10.200.16.10 port 44804 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:17.533548 sshd[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:17.543598 systemd-logind[1662]: New session 10 of user core. Jul 7 05:57:17.547303 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 05:57:17.985805 sshd[6491]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:17.993119 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:44804.service: Deactivated successfully. Jul 7 05:57:17.999277 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 05:57:18.000519 systemd-logind[1662]: Session 10 logged out. Waiting for processes to exit. Jul 7 05:57:18.003718 systemd-logind[1662]: Removed session 10. Jul 7 05:57:23.083969 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:43636.service - OpenSSH per-connection server daemon (10.200.16.10:43636). Jul 7 05:57:23.353673 systemd[1]: run-containerd-runc-k8s.io-de53307f30032366e755499c68834bf35a6b6cc8196f9ca7b758da6640783634-runc.HCrvpA.mount: Deactivated successfully. 
Jul 7 05:57:23.575506 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 43636 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:23.578265 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:23.586477 systemd-logind[1662]: New session 11 of user core. Jul 7 05:57:23.591307 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 05:57:24.001824 sshd[6505]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:24.007042 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:43636.service: Deactivated successfully. Jul 7 05:57:24.009814 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 05:57:24.010757 systemd-logind[1662]: Session 11 logged out. Waiting for processes to exit. Jul 7 05:57:24.011760 systemd-logind[1662]: Removed session 11. Jul 7 05:57:29.100432 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:43648.service - OpenSSH per-connection server daemon (10.200.16.10:43648). Jul 7 05:57:29.591613 sshd[6541]: Accepted publickey for core from 10.200.16.10 port 43648 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:29.593426 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:29.598156 systemd-logind[1662]: New session 12 of user core. Jul 7 05:57:29.604353 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 05:57:30.012981 sshd[6541]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:30.016540 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:43648.service: Deactivated successfully. Jul 7 05:57:30.018813 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 05:57:30.020295 systemd-logind[1662]: Session 12 logged out. Waiting for processes to exit. Jul 7 05:57:30.021179 systemd-logind[1662]: Removed session 12. 
Jul 7 05:57:30.100372 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:45832.service - OpenSSH per-connection server daemon (10.200.16.10:45832). Jul 7 05:57:30.584066 sshd[6555]: Accepted publickey for core from 10.200.16.10 port 45832 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:30.586189 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:30.590843 systemd-logind[1662]: New session 13 of user core. Jul 7 05:57:30.602325 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 05:57:31.031754 sshd[6555]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:31.036000 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:45832.service: Deactivated successfully. Jul 7 05:57:31.038886 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 05:57:31.039920 systemd-logind[1662]: Session 13 logged out. Waiting for processes to exit. Jul 7 05:57:31.040957 systemd-logind[1662]: Removed session 13. Jul 7 05:57:31.123824 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:45846.service - OpenSSH per-connection server daemon (10.200.16.10:45846). Jul 7 05:57:31.600116 sshd[6566]: Accepted publickey for core from 10.200.16.10 port 45846 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:31.600171 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:31.607464 systemd-logind[1662]: New session 14 of user core. Jul 7 05:57:31.609934 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 05:57:32.024959 sshd[6566]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:32.029610 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:45846.service: Deactivated successfully. Jul 7 05:57:32.030384 systemd-logind[1662]: Session 14 logged out. Waiting for processes to exit. Jul 7 05:57:32.032910 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 7 05:57:32.034060 systemd-logind[1662]: Removed session 14. Jul 7 05:57:37.116465 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:45856.service - OpenSSH per-connection server daemon (10.200.16.10:45856). Jul 7 05:57:37.590394 sshd[6646]: Accepted publickey for core from 10.200.16.10 port 45856 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:37.592008 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:37.596625 systemd-logind[1662]: New session 15 of user core. Jul 7 05:57:37.604276 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 05:57:37.998962 sshd[6646]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:38.003248 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:45856.service: Deactivated successfully. Jul 7 05:57:38.005382 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 05:57:38.007507 systemd-logind[1662]: Session 15 logged out. Waiting for processes to exit. Jul 7 05:57:38.008643 systemd-logind[1662]: Removed session 15. Jul 7 05:57:43.090670 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:55086.service - OpenSSH per-connection server daemon (10.200.16.10:55086). Jul 7 05:57:43.566274 sshd[6661]: Accepted publickey for core from 10.200.16.10 port 55086 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:43.567781 sshd[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:43.573969 systemd-logind[1662]: New session 16 of user core. Jul 7 05:57:43.584298 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 05:57:43.976806 sshd[6661]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:43.980948 systemd-logind[1662]: Session 16 logged out. Waiting for processes to exit. Jul 7 05:57:43.982347 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:55086.service: Deactivated successfully. 
Jul 7 05:57:43.984724 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 05:57:43.986317 systemd-logind[1662]: Removed session 16. Jul 7 05:57:49.073396 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:55092.service - OpenSSH per-connection server daemon (10.200.16.10:55092). Jul 7 05:57:49.549723 sshd[6688]: Accepted publickey for core from 10.200.16.10 port 55092 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:49.551892 sshd[6688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:49.558867 systemd-logind[1662]: New session 17 of user core. Jul 7 05:57:49.564713 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 05:57:49.995635 sshd[6688]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:50.000203 systemd-logind[1662]: Session 17 logged out. Waiting for processes to exit. Jul 7 05:57:50.000967 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:55092.service: Deactivated successfully. Jul 7 05:57:50.005701 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 05:57:50.011238 systemd-logind[1662]: Removed session 17. Jul 7 05:57:50.093487 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:48612.service - OpenSSH per-connection server daemon (10.200.16.10:48612). Jul 7 05:57:50.569046 sshd[6701]: Accepted publickey for core from 10.200.16.10 port 48612 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:50.571526 sshd[6701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:50.580285 systemd-logind[1662]: New session 18 of user core. Jul 7 05:57:50.583329 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 05:57:51.154973 sshd[6701]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:51.158690 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:48612.service: Deactivated successfully. 
Jul 7 05:57:51.161230 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 05:57:51.162665 systemd-logind[1662]: Session 18 logged out. Waiting for processes to exit. Jul 7 05:57:51.164935 systemd-logind[1662]: Removed session 18. Jul 7 05:57:51.246520 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:48618.service - OpenSSH per-connection server daemon (10.200.16.10:48618). Jul 7 05:57:51.725702 sshd[6711]: Accepted publickey for core from 10.200.16.10 port 48618 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:51.728847 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:51.736910 systemd-logind[1662]: New session 19 of user core. Jul 7 05:57:51.740286 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 05:57:54.159176 sshd[6711]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:54.163291 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:48618.service: Deactivated successfully. Jul 7 05:57:54.166060 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 05:57:54.167066 systemd-logind[1662]: Session 19 logged out. Waiting for processes to exit. Jul 7 05:57:54.169376 systemd-logind[1662]: Removed session 19. Jul 7 05:57:54.251408 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:48620.service - OpenSSH per-connection server daemon (10.200.16.10:48620). Jul 7 05:57:54.741164 sshd[6758]: Accepted publickey for core from 10.200.16.10 port 48620 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:54.742956 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:54.747338 systemd-logind[1662]: New session 20 of user core. Jul 7 05:57:54.755287 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 7 05:57:55.295344 sshd[6758]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:55.299673 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:48620.service: Deactivated successfully. Jul 7 05:57:55.302712 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 05:57:55.304009 systemd-logind[1662]: Session 20 logged out. Waiting for processes to exit. Jul 7 05:57:55.305207 systemd-logind[1662]: Removed session 20. Jul 7 05:57:55.383591 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:48628.service - OpenSSH per-connection server daemon (10.200.16.10:48628). Jul 7 05:57:55.861333 sshd[6769]: Accepted publickey for core from 10.200.16.10 port 48628 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:57:55.862771 sshd[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:57:55.867068 systemd-logind[1662]: New session 21 of user core. Jul 7 05:57:55.873246 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 05:57:56.275278 sshd[6769]: pam_unix(sshd:session): session closed for user core Jul 7 05:57:56.279119 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:48628.service: Deactivated successfully. Jul 7 05:57:56.281803 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 05:57:56.284030 systemd-logind[1662]: Session 21 logged out. Waiting for processes to exit. Jul 7 05:57:56.285671 systemd-logind[1662]: Removed session 21. Jul 7 05:58:01.374396 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:46344.service - OpenSSH per-connection server daemon (10.200.16.10:46344). Jul 7 05:58:01.860248 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 46344 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:58:01.861878 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:58:01.869285 systemd-logind[1662]: New session 22 of user core. 
Jul 7 05:58:01.871498 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 05:58:02.271366 sshd[6784]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:02.275724 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:46344.service: Deactivated successfully.
Jul 7 05:58:02.278952 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 05:58:02.280320 systemd-logind[1662]: Session 22 logged out. Waiting for processes to exit.
Jul 7 05:58:02.283418 systemd-logind[1662]: Removed session 22.
Jul 7 05:58:07.369835 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:46346.service - OpenSSH per-connection server daemon (10.200.16.10:46346).
Jul 7 05:58:07.842771 sshd[6836]: Accepted publickey for core from 10.200.16.10 port 46346 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:58:07.844509 sshd[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:58:07.851173 systemd-logind[1662]: New session 23 of user core.
Jul 7 05:58:07.854673 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 05:58:08.250689 sshd[6836]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:08.253937 systemd-logind[1662]: Session 23 logged out. Waiting for processes to exit.
Jul 7 05:58:08.254171 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:46346.service: Deactivated successfully.
Jul 7 05:58:08.256806 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 05:58:08.259137 systemd-logind[1662]: Removed session 23.
Jul 7 05:58:13.342800 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:49964.service - OpenSSH per-connection server daemon (10.200.16.10:49964).
Jul 7 05:58:13.812003 sshd[6868]: Accepted publickey for core from 10.200.16.10 port 49964 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:58:13.813589 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:58:13.817627 systemd-logind[1662]: New session 24 of user core.
Jul 7 05:58:13.828303 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 05:58:14.226842 sshd[6868]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:14.230922 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:49964.service: Deactivated successfully.
Jul 7 05:58:14.232852 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 05:58:14.234602 systemd-logind[1662]: Session 24 logged out. Waiting for processes to exit.
Jul 7 05:58:14.235582 systemd-logind[1662]: Removed session 24.
Jul 7 05:58:19.314559 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:49972.service - OpenSSH per-connection server daemon (10.200.16.10:49972).
Jul 7 05:58:19.795690 sshd[6881]: Accepted publickey for core from 10.200.16.10 port 49972 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:58:19.796648 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:58:19.802175 systemd-logind[1662]: New session 25 of user core.
Jul 7 05:58:19.805266 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 05:58:20.206974 sshd[6881]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:20.210636 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:49972.service: Deactivated successfully.
Jul 7 05:58:20.212389 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 05:58:20.212999 systemd-logind[1662]: Session 25 logged out. Waiting for processes to exit.
Jul 7 05:58:20.213954 systemd-logind[1662]: Removed session 25.
Jul 7 05:58:25.313732 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:40842.service - OpenSSH per-connection server daemon (10.200.16.10:40842).
Jul 7 05:58:25.786565 sshd[6916]: Accepted publickey for core from 10.200.16.10 port 40842 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:58:25.788189 sshd[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:58:25.793393 systemd-logind[1662]: New session 26 of user core.
Jul 7 05:58:25.800502 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 05:58:26.229162 sshd[6916]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:26.235328 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:40842.service: Deactivated successfully.
Jul 7 05:58:26.240491 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 05:58:26.241345 systemd-logind[1662]: Session 26 logged out. Waiting for processes to exit.
Jul 7 05:58:26.242538 systemd-logind[1662]: Removed session 26.
Jul 7 05:58:31.316045 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:34288.service - OpenSSH per-connection server daemon (10.200.16.10:34288).
Jul 7 05:58:31.786312 sshd[6931]: Accepted publickey for core from 10.200.16.10 port 34288 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:58:31.787899 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:58:31.792652 systemd-logind[1662]: New session 27 of user core.
Jul 7 05:58:31.798281 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 05:58:32.202301 sshd[6931]: pam_unix(sshd:session): session closed for user core
Jul 7 05:58:32.206378 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:34288.service: Deactivated successfully.
Jul 7 05:58:32.208326 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 05:58:32.209342 systemd-logind[1662]: Session 27 logged out. Waiting for processes to exit.
Jul 7 05:58:32.210474 systemd-logind[1662]: Removed session 27.