Jul 7 05:52:09.386857 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 05:52:09.386881 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 05:52:09.386889 kernel: KASLR enabled
Jul 7 05:52:09.386895 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 7 05:52:09.386902 kernel: printk: bootconsole [pl11] enabled
Jul 7 05:52:09.386908 kernel: efi: EFI v2.7 by EDK II
Jul 7 05:52:09.386915 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jul 7 05:52:09.386921 kernel: random: crng init done
Jul 7 05:52:09.386927 kernel: ACPI: Early table checksum verification disabled
Jul 7 05:52:09.386933 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 7 05:52:09.386939 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386945 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386953 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 05:52:09.386959 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386966 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386973 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386979 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386987 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.386994 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.387000 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 7 05:52:09.387007 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 05:52:09.387013 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 7 05:52:09.387019 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 7 05:52:09.387026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 7 05:52:09.387032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 7 05:52:09.387039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 7 05:52:09.387045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 7 05:52:09.387051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 7 05:52:09.387059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 7 05:52:09.387066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 7 05:52:09.387072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 7 05:52:09.387078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 7 05:52:09.387085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 7 05:52:09.387091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 7 05:52:09.387097 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 7 05:52:09.387103 kernel: Zone ranges:
Jul 7 05:52:09.387110 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 7 05:52:09.387116 kernel: DMA32 empty
Jul 7 05:52:09.387122 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 05:52:09.387129 kernel: Movable zone start for each node
Jul 7 05:52:09.387139 kernel: Early memory node ranges
Jul 7 05:52:09.387146 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 7 05:52:09.387153 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 7 05:52:09.387160 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 7 05:52:09.387166 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 7 05:52:09.387174 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 7 05:52:09.387181 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 7 05:52:09.387188 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 05:52:09.387195 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 7 05:52:09.387202 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 7 05:52:09.387208 kernel: psci: probing for conduit method from ACPI.
Jul 7 05:52:09.387215 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 05:52:09.387222 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 05:52:09.387228 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 7 05:52:09.387235 kernel: psci: SMC Calling Convention v1.4
Jul 7 05:52:09.387242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 7 05:52:09.387249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 7 05:52:09.387257 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 05:52:09.387264 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 05:52:09.387271 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 05:52:09.387278 kernel: Detected PIPT I-cache on CPU0
Jul 7 05:52:09.387285 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 05:52:09.387292 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 05:52:09.387299 kernel: CPU features: detected: Spectre-BHB
Jul 7 05:52:09.387305 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 05:52:09.387312 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 05:52:09.387319 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 05:52:09.387326 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 7 05:52:09.387334 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 05:52:09.387341 kernel: alternatives: applying boot alternatives
Jul 7 05:52:09.387349 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:09.387357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 05:52:09.387363 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 05:52:09.387370 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 05:52:09.389442 kernel: Fallback order for Node 0: 0
Jul 7 05:52:09.389454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 7 05:52:09.389461 kernel: Policy zone: Normal
Jul 7 05:52:09.389468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 05:52:09.389476 kernel: software IO TLB: area num 2.
Jul 7 05:52:09.389489 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jul 7 05:52:09.389497 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Jul 7 05:52:09.389505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 05:52:09.389512 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 05:52:09.389519 kernel: rcu: RCU event tracing is enabled.
Jul 7 05:52:09.389527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 05:52:09.389534 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 05:52:09.389541 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 05:52:09.389548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 05:52:09.389555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 05:52:09.389562 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 05:52:09.389570 kernel: GICv3: 960 SPIs implemented
Jul 7 05:52:09.389577 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 05:52:09.389584 kernel: Root IRQ handler: gic_handle_irq
Jul 7 05:52:09.389591 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 05:52:09.389598 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 7 05:52:09.389605 kernel: ITS: No ITS available, not enabling LPIs
Jul 7 05:52:09.389612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 05:52:09.389619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:52:09.389626 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 05:52:09.389633 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 05:52:09.389640 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 05:52:09.389649 kernel: Console: colour dummy device 80x25
Jul 7 05:52:09.389656 kernel: printk: console [tty1] enabled
Jul 7 05:52:09.389663 kernel: ACPI: Core revision 20230628
Jul 7 05:52:09.389670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 05:52:09.389678 kernel: pid_max: default: 32768 minimum: 301
Jul 7 05:52:09.389685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 05:52:09.389691 kernel: landlock: Up and running.
Jul 7 05:52:09.389698 kernel: SELinux: Initializing.
Jul 7 05:52:09.389705 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:09.389713 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:09.389722 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:09.389729 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:09.389737 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 7 05:52:09.389744 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 7 05:52:09.389751 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 05:52:09.389758 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 05:52:09.389765 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 05:52:09.389779 kernel: Remapping and enabling EFI services.
Jul 7 05:52:09.389787 kernel: smp: Bringing up secondary CPUs ...
Jul 7 05:52:09.389794 kernel: Detected PIPT I-cache on CPU1
Jul 7 05:52:09.389801 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 7 05:52:09.389811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:52:09.389818 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 05:52:09.389825 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 05:52:09.389833 kernel: SMP: Total of 2 processors activated.
Jul 7 05:52:09.389840 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 05:52:09.389849 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 7 05:52:09.389857 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 05:52:09.389864 kernel: CPU features: detected: CRC32 instructions
Jul 7 05:52:09.389872 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 05:52:09.389879 kernel: CPU features: detected: LSE atomic instructions
Jul 7 05:52:09.389887 kernel: CPU features: detected: Privileged Access Never
Jul 7 05:52:09.389894 kernel: CPU: All CPU(s) started at EL1
Jul 7 05:52:09.389901 kernel: alternatives: applying system-wide alternatives
Jul 7 05:52:09.389909 kernel: devtmpfs: initialized
Jul 7 05:52:09.389918 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 05:52:09.389926 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 05:52:09.389934 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 05:52:09.389941 kernel: SMBIOS 3.1.0 present.
Jul 7 05:52:09.389949 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 7 05:52:09.389956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 05:52:09.389964 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 05:52:09.389971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 05:52:09.389979 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 05:52:09.389988 kernel: audit: initializing netlink subsys (disabled)
Jul 7 05:52:09.389996 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 7 05:52:09.390003 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 05:52:09.390011 kernel: cpuidle: using governor menu
Jul 7 05:52:09.390018 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 05:52:09.390026 kernel: ASID allocator initialised with 32768 entries
Jul 7 05:52:09.390033 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 05:52:09.390041 kernel: Serial: AMBA PL011 UART driver
Jul 7 05:52:09.390048 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 05:52:09.390057 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 05:52:09.390064 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 05:52:09.390072 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 05:52:09.390079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 05:52:09.390087 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 05:52:09.390095 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 05:52:09.390102 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 05:52:09.390110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 05:52:09.390117 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 05:52:09.390126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 05:52:09.390134 kernel: ACPI: Added _OSI(Module Device)
Jul 7 05:52:09.390172 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 05:52:09.390223 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 05:52:09.390251 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 05:52:09.390275 kernel: ACPI: Interpreter enabled
Jul 7 05:52:09.390283 kernel: ACPI: Using GIC for interrupt routing
Jul 7 05:52:09.390291 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 05:52:09.390299 kernel: printk: console [ttyAMA0] enabled
Jul 7 05:52:09.390331 kernel: printk: bootconsole [pl11] disabled
Jul 7 05:52:09.390354 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 7 05:52:09.390362 kernel: iommu: Default domain type: Translated
Jul 7 05:52:09.390370 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 05:52:09.390428 kernel: efivars: Registered efivars operations
Jul 7 05:52:09.390437 kernel: vgaarb: loaded
Jul 7 05:52:09.390444 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 05:52:09.390452 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 05:52:09.390475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 05:52:09.390503 kernel: pnp: PnP ACPI init
Jul 7 05:52:09.390511 kernel: pnp: PnP ACPI: found 0 devices
Jul 7 05:52:09.390519 kernel: NET: Registered PF_INET protocol family
Jul 7 05:52:09.390526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 05:52:09.390549 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 05:52:09.390557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 05:52:09.390565 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 05:52:09.390572 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 05:52:09.390596 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 05:52:09.390606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:09.390613 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:09.390635 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 05:52:09.390643 kernel: PCI: CLS 0 bytes, default 64
Jul 7 05:52:09.390667 kernel: kvm [1]: HYP mode not available
Jul 7 05:52:09.390675 kernel: Initialise system trusted keyrings
Jul 7 05:52:09.390682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 05:52:09.390690 kernel: Key type asymmetric registered
Jul 7 05:52:09.390709 kernel: Asymmetric key parser 'x509' registered
Jul 7 05:52:09.390718 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 05:52:09.390739 kernel: io scheduler mq-deadline registered
Jul 7 05:52:09.390746 kernel: io scheduler kyber registered
Jul 7 05:52:09.390754 kernel: io scheduler bfq registered
Jul 7 05:52:09.390761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 05:52:09.390791 kernel: thunder_xcv, ver 1.0
Jul 7 05:52:09.390815 kernel: thunder_bgx, ver 1.0
Jul 7 05:52:09.390823 kernel: nicpf, ver 1.0
Jul 7 05:52:09.390831 kernel: nicvf, ver 1.0
Jul 7 05:52:09.391124 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 05:52:09.391294 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:52:08 UTC (1751867528)
Jul 7 05:52:09.391307 kernel: efifb: probing for efifb
Jul 7 05:52:09.391315 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 05:52:09.391338 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 05:52:09.391346 kernel: efifb: scrolling: redraw
Jul 7 05:52:09.391369 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 05:52:09.393450 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 05:52:09.393474 kernel: fb0: EFI VGA frame buffer device
Jul 7 05:52:09.393482 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 7 05:52:09.393489 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 05:52:09.393497 kernel: No ACPI PMU IRQ for CPU0
Jul 7 05:52:09.393505 kernel: No ACPI PMU IRQ for CPU1
Jul 7 05:52:09.393512 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 7 05:52:09.393520 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 05:52:09.393527 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 05:52:09.393535 kernel: NET: Registered PF_INET6 protocol family
Jul 7 05:52:09.393545 kernel: Segment Routing with IPv6
Jul 7 05:52:09.393552 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 05:52:09.393560 kernel: NET: Registered PF_PACKET protocol family
Jul 7 05:52:09.393568 kernel: Key type dns_resolver registered
Jul 7 05:52:09.393576 kernel: registered taskstats version 1
Jul 7 05:52:09.393583 kernel: Loading compiled-in X.509 certificates
Jul 7 05:52:09.393591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 05:52:09.393598 kernel: Key type .fscrypt registered
Jul 7 05:52:09.393610 kernel: Key type fscrypt-provisioning registered
Jul 7 05:52:09.393619 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 05:52:09.393627 kernel: ima: Allocated hash algorithm: sha1
Jul 7 05:52:09.393634 kernel: ima: No architecture policies found
Jul 7 05:52:09.393642 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 05:52:09.393649 kernel: clk: Disabling unused clocks
Jul 7 05:52:09.393657 kernel: Freeing unused kernel memory: 39424K
Jul 7 05:52:09.393664 kernel: Run /init as init process
Jul 7 05:52:09.393672 kernel: with arguments:
Jul 7 05:52:09.393680 kernel: /init
Jul 7 05:52:09.393689 kernel: with environment:
Jul 7 05:52:09.393696 kernel: HOME=/
Jul 7 05:52:09.393703 kernel: TERM=linux
Jul 7 05:52:09.393711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 05:52:09.393721 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:52:09.393731 systemd[1]: Detected virtualization microsoft.
Jul 7 05:52:09.393739 systemd[1]: Detected architecture arm64.
Jul 7 05:52:09.393747 systemd[1]: Running in initrd.
Jul 7 05:52:09.393757 systemd[1]: No hostname configured, using default hostname.
Jul 7 05:52:09.393765 systemd[1]: Hostname set to .
Jul 7 05:52:09.393773 systemd[1]: Initializing machine ID from random generator.
Jul 7 05:52:09.393781 systemd[1]: Queued start job for default target initrd.target.
Jul 7 05:52:09.393789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:09.393797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:09.393806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 05:52:09.393814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:52:09.393824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 05:52:09.393832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 05:52:09.393842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 05:52:09.393850 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 05:52:09.393858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:09.393866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:09.393876 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:52:09.393884 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:52:09.393892 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:52:09.393900 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:52:09.393908 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:52:09.393916 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:52:09.393924 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:52:09.393933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:52:09.393941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:09.393951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:09.393960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:09.393968 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:52:09.393976 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 05:52:09.393984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:52:09.393992 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 05:52:09.394001 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 05:52:09.394009 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:52:09.394053 systemd-journald[216]: Collecting audit messages is disabled.
Jul 7 05:52:09.394078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:52:09.394086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:09.394095 systemd-journald[216]: Journal started
Jul 7 05:52:09.394116 systemd-journald[216]: Runtime Journal (/run/log/journal/9c419361a05141728affc10e60987087) is 8.0M, max 78.5M, 70.5M free.
Jul 7 05:52:09.400910 systemd-modules-load[217]: Inserted module 'overlay'
Jul 7 05:52:09.440981 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:52:09.441025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 05:52:09.452311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 05:52:09.463595 kernel: Bridge firewalling registered
Jul 7 05:52:09.458715 systemd-modules-load[217]: Inserted module 'br_netfilter'
Jul 7 05:52:09.466955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:09.480829 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 05:52:09.493897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:09.506264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:09.534148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:09.552620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:52:09.575625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:52:09.593785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:52:09.616412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:09.627537 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:09.646615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:09.654095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:09.689590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:52:09.704594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:09.715643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:52:09.738198 dracut-cmdline[248]: dracut-dracut-053
Jul 7 05:52:09.738198 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:09.791868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:09.800708 systemd-resolved[252]: Positive Trust Anchors:
Jul 7 05:52:09.800719 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:52:09.800751 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:52:09.803956 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 7 05:52:09.806032 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:09.819119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:09.949396 kernel: SCSI subsystem initialized
Jul 7 05:52:09.956412 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:52:09.967407 kernel: iscsi: registered transport (tcp)
Jul 7 05:52:09.986687 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:52:09.986777 kernel: QLogic iSCSI HBA Driver
Jul 7 05:52:10.033019 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:10.047645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:52:10.083286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:52:10.083360 kernel: device-mapper: uevent: version 1.0.3 Jul 7 05:52:10.090401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 05:52:10.142409 kernel: raid6: neonx8 gen() 15739 MB/s Jul 7 05:52:10.162389 kernel: raid6: neonx4 gen() 15421 MB/s Jul 7 05:52:10.182422 kernel: raid6: neonx2 gen() 13051 MB/s Jul 7 05:52:10.203404 kernel: raid6: neonx1 gen() 10284 MB/s Jul 7 05:52:10.223416 kernel: raid6: int64x8 gen() 6812 MB/s Jul 7 05:52:10.243406 kernel: raid6: int64x4 gen() 7196 MB/s Jul 7 05:52:10.265390 kernel: raid6: int64x2 gen() 6026 MB/s Jul 7 05:52:10.289851 kernel: raid6: int64x1 gen() 4986 MB/s Jul 7 05:52:10.289873 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s Jul 7 05:52:10.316610 kernel: raid6: .... xor() 11715 MB/s, rmw enabled Jul 7 05:52:10.316721 kernel: raid6: using neon recovery algorithm Jul 7 05:52:10.327396 kernel: xor: measuring software checksum speed Jul 7 05:52:10.335387 kernel: 8regs : 17991 MB/sec Jul 7 05:52:10.335445 kernel: 32regs : 19529 MB/sec Jul 7 05:52:10.340134 kernel: arm64_neon : 26874 MB/sec Jul 7 05:52:10.346674 kernel: xor: using function: arm64_neon (26874 MB/sec) Jul 7 05:52:10.401414 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 05:52:10.413746 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:52:10.433580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:52:10.459996 systemd-udevd[435]: Using default interface naming scheme 'v255'. Jul 7 05:52:10.467038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:52:10.504550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 05:52:10.523544 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jul 7 05:52:10.557872 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 7 05:52:10.583726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:52:10.629385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:52:10.655170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 05:52:10.682083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 05:52:10.698311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:52:10.723639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:52:10.740799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:52:10.774431 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 05:52:10.766871 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 05:52:10.814809 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 05:52:10.814837 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 7 05:52:10.814857 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 05:52:10.778394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:52:10.856540 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 05:52:10.856569 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 05:52:10.856580 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 05:52:10.856590 kernel: PTP clock support registered Jul 7 05:52:10.778569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 05:52:10.887097 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 05:52:10.887125 kernel: hv_vmbus: registering driver hv_utils
Jul 7 05:52:10.887135 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 05:52:10.835881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:11.229664 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 05:52:11.229700 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 05:52:11.229710 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 7 05:52:11.229721 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 05:52:11.229733 kernel: scsi host1: storvsc_host_t
Jul 7 05:52:11.229954 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 05:52:11.230073 kernel: scsi host0: storvsc_host_t
Jul 7 05:52:10.871944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:11.253953 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 05:52:11.254010 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 7 05:52:10.872214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:10.899350 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.196037 systemd-resolved[252]: Clock change detected. Flushing caches.
Jul 7 05:52:11.247416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.271940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:11.291092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.310449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:11.310642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.320099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.377254 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: VF slot 1 added
Jul 7 05:52:11.377468 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 05:52:11.377401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.398290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 05:52:11.415112 kernel: hv_vmbus: registering driver hv_pci
Jul 7 05:52:11.415177 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 05:52:11.425846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.454658 kernel: hv_pci d3d7938b-013e-45f8-b634-b211d987b93c: PCI VMBus probing: Using version 0x10004
Jul 7 05:52:11.454849 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 05:52:11.454966 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 05:52:11.455432 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 05:52:11.455394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:11.562530 kernel: hv_pci d3d7938b-013e-45f8-b634-b211d987b93c: PCI host bridge to bus 013e:00
Jul 7 05:52:11.562721 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 05:52:11.562826 kernel: pci_bus 013e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 05:52:11.562930 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 05:52:11.563025 kernel: pci_bus 013e:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 05:52:11.563132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:11.563143 kernel: pci 013e:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 7 05:52:11.563167 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 05:52:11.563255 kernel: pci 013e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:52:11.563272 kernel: pci 013e:00:02.0: enabling Extended Tags
Jul 7 05:52:11.563286 kernel: pci 013e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 013e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 7 05:52:11.580093 kernel: pci_bus 013e:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 05:52:11.580352 kernel: pci 013e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:52:11.588465 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:11.644024 kernel: mlx5_core 013e:00:02.0: enabling device (0000 -> 0002)
Jul 7 05:52:11.652095 kernel: mlx5_core 013e:00:02.0: firmware version: 16.31.2424
Jul 7 05:52:11.952642 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: VF registering: eth1
Jul 7 05:52:11.952900 kernel: mlx5_core 013e:00:02.0 eth1: joined to eth0
Jul 7 05:52:11.964290 kernel: mlx5_core 013e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 05:52:11.976079 kernel: mlx5_core 013e:00:02.0 enP318s1: renamed from eth1
Jul 7 05:52:11.992144 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 05:52:12.083256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 05:52:12.118821 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (481)
Jul 7 05:52:12.118878 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497)
Jul 7 05:52:12.134765 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 05:52:12.141818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 05:52:12.155409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:52:12.182380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:52:12.207105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:13.225126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:13.225626 disk-uuid[605]: The operation has completed successfully.
Jul 7 05:52:13.288177 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:52:13.288281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:52:13.322260 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:52:13.348012 sh[718]: Success
Jul 7 05:52:13.381160 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:52:13.551887 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:52:13.584277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:52:13.591152 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:52:13.642574 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:52:13.642637 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:13.651610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:52:13.657683 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:52:13.662254 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:52:13.940402 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:52:13.947414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:52:13.973363 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:52:13.984482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:52:14.023121 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:14.023155 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:14.029371 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:14.071111 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:14.080574 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:52:14.096099 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:14.108089 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:52:14.126473 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:52:14.167646 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:14.185273 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:52:14.223756 systemd-networkd[902]: lo: Link UP
Jul 7 05:52:14.223770 systemd-networkd[902]: lo: Gained carrier
Jul 7 05:52:14.225466 systemd-networkd[902]: Enumeration completed
Jul 7 05:52:14.227649 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:52:14.228207 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:14.228210 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:52:14.234833 systemd[1]: Reached target network.target - Network.
Jul 7 05:52:14.335097 kernel: mlx5_core 013e:00:02.0 enP318s1: Link up
Jul 7 05:52:14.420086 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: Data path switched to VF: enP318s1
Jul 7 05:52:14.420934 systemd-networkd[902]: enP318s1: Link UP
Jul 7 05:52:14.421088 systemd-networkd[902]: eth0: Link UP
Jul 7 05:52:14.421239 systemd-networkd[902]: eth0: Gained carrier
Jul 7 05:52:14.421249 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:14.448766 systemd-networkd[902]: enP318s1: Gained carrier
Jul 7 05:52:14.465140 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:52:14.789806 ignition[868]: Ignition 2.19.0
Jul 7 05:52:14.793464 ignition[868]: Stage: fetch-offline
Jul 7 05:52:14.793527 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:14.795047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:14.793536 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:14.812488 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:52:14.793662 ignition[868]: parsed url from cmdline: ""
Jul 7 05:52:14.793665 ignition[868]: no config URL provided
Jul 7 05:52:14.793670 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.793677 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.793683 ignition[868]: failed to fetch config: resource requires networking
Jul 7 05:52:14.793890 ignition[868]: Ignition finished successfully
Jul 7 05:52:14.850353 ignition[911]: Ignition 2.19.0
Jul 7 05:52:14.850361 ignition[911]: Stage: fetch
Jul 7 05:52:14.850601 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:14.850612 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:14.850731 ignition[911]: parsed url from cmdline: ""
Jul 7 05:52:14.850738 ignition[911]: no config URL provided
Jul 7 05:52:14.850742 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.850750 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.850775 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 05:52:14.968625 ignition[911]: GET result: OK
Jul 7 05:52:14.968724 ignition[911]: config has been read from IMDS userdata
Jul 7 05:52:14.968767 ignition[911]: parsing config with SHA512: 7c3f39e9fe3a8d8e76d5ee4c07283baf81a91df59d8a97849773e1dd5ab6a7a1ff9947705ce3ee796461f93090ba4568462861caf90b72f28cb5a4b2c119c36e
Jul 7 05:52:14.973356 unknown[911]: fetched base config from "system"
Jul 7 05:52:14.973838 ignition[911]: fetch: fetch complete
Jul 7 05:52:14.973364 unknown[911]: fetched base config from "system"
Jul 7 05:52:14.973843 ignition[911]: fetch: fetch passed
Jul 7 05:52:14.973369 unknown[911]: fetched user config from "azure"
Jul 7 05:52:14.973891 ignition[911]: Ignition finished successfully
Jul 7 05:52:14.984607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:15.008268 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:52:15.032496 ignition[917]: Ignition 2.19.0
Jul 7 05:52:15.032509 ignition[917]: Stage: kargs
Jul 7 05:52:15.032708 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:15.040363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:15.032720 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:15.033729 ignition[917]: kargs: kargs passed
Jul 7 05:52:15.033787 ignition[917]: Ignition finished successfully
Jul 7 05:52:15.065393 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:52:15.089547 ignition[923]: Ignition 2.19.0
Jul 7 05:52:15.089559 ignition[923]: Stage: disks
Jul 7 05:52:15.096871 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:52:15.089775 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:15.089785 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:15.108656 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:15.090953 ignition[923]: disks: disks passed
Jul 7 05:52:15.120375 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:52:15.091036 ignition[923]: Ignition finished successfully
Jul 7 05:52:15.133573 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:52:15.144757 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:52:15.154227 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:52:15.187263 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:52:15.253444 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 7 05:52:15.262956 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:52:15.281273 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:52:15.338131 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:52:15.339172 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:52:15.344272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:52:15.388178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:15.398319 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:52:15.420104 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Jul 7 05:52:15.420151 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:15.434595 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:15.438825 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:15.445501 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 05:52:15.461189 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:15.454325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:52:15.454389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:15.469535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:15.485469 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:52:15.510392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:52:15.918080 coreos-metadata[945]: Jul 07 05:52:15.918 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 05:52:15.929426 coreos-metadata[945]: Jul 07 05:52:15.929 INFO Fetch successful
Jul 7 05:52:15.935203 coreos-metadata[945]: Jul 07 05:52:15.934 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 05:52:15.947163 coreos-metadata[945]: Jul 07 05:52:15.946 INFO Fetch successful
Jul 7 05:52:15.959272 coreos-metadata[945]: Jul 07 05:52:15.959 INFO wrote hostname ci-4081.3.4-a-5429f7cfbd to /sysroot/etc/hostname
Jul 7 05:52:15.968849 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:52:16.021259 systemd-networkd[902]: eth0: Gained IPv6LL
Jul 7 05:52:16.161905 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:52:16.185981 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:52:16.197233 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:52:16.205213 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:52:16.469264 systemd-networkd[902]: enP318s1: Gained IPv6LL
Jul 7 05:52:16.975917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:16.997306 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:52:17.010374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:17.040638 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:17.034338 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:52:17.069128 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:17.084586 ignition[1062]: INFO : Ignition 2.19.0
Jul 7 05:52:17.091009 ignition[1062]: INFO : Stage: mount
Jul 7 05:52:17.091009 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:17.091009 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:17.091009 ignition[1062]: INFO : mount: mount passed
Jul 7 05:52:17.091009 ignition[1062]: INFO : Ignition finished successfully
Jul 7 05:52:17.096155 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:52:17.128311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:52:17.159478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:17.196314 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074)
Jul 7 05:52:17.196377 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:17.203120 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:17.208008 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:17.216077 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:17.218333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:17.247323 ignition[1092]: INFO : Ignition 2.19.0
Jul 7 05:52:17.247323 ignition[1092]: INFO : Stage: files
Jul 7 05:52:17.256461 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:17.256461 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:17.256461 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:52:17.277994 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:52:17.277994 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:52:17.352679 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:52:17.361168 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:52:17.369310 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:52:17.361718 unknown[1092]: wrote ssh authorized keys file for user: core
Jul 7 05:52:17.388318 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:17.468390 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 05:52:17.570901 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:17.570901 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:52:18.370340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 05:52:18.607447 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:18.607447 ignition[1092]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: files passed
Jul 7 05:52:18.728888 ignition[1092]: INFO : Ignition finished successfully
Jul 7 05:52:18.643670 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:52:18.697342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:52:18.711471 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:52:18.793425 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.793425 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.728997 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:52:18.829967 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.729132 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:52:18.790834 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:18.800840 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:52:18.845385 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:52:18.893038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:52:18.893194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:52:18.906334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:52:18.919616 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:52:18.931003 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:52:18.947348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:52:18.971859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:18.988370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:52:19.010140 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:52:19.010252 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:52:19.024040 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:19.038251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:19.050872 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:52:19.068093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:52:19.068177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:19.085675 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:52:19.097602 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:52:19.108205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:52:19.118937 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:19.130897 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:19.143365 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:52:19.154863 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:19.167831 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:52:19.181584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:52:19.192798 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:52:19.203882 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:52:19.203981 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:19.219598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:19.225983 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:19.238276 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:52:19.243828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:19.251437 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:52:19.251523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:19.269150 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:52:19.269225 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:19.283955 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:52:19.284017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:52:19.294615 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 05:52:19.294667 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:52:19.327291 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:52:19.374616 ignition[1144]: INFO : Ignition 2.19.0
Jul 7 05:52:19.374616 ignition[1144]: INFO : Stage: umount
Jul 7 05:52:19.374616 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:19.374616 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:19.374616 ignition[1144]: INFO : umount: umount passed
Jul 7 05:52:19.374616 ignition[1144]: INFO : Ignition finished successfully
Jul 7 05:52:19.354200 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:19.366183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:52:19.366274 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:19.379381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:52:19.379450 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:19.397957 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:52:19.398582 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:52:19.398705 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:52:19.412586 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:52:19.412701 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:52:19.419692 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:52:19.419764 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:19.428831 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:52:19.428892 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:19.439422 systemd[1]: Stopped target network.target - Network.
Jul 7 05:52:19.444315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:52:19.444397 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:19.457101 systemd[1]: Stopped target paths.target - Path Units. Jul 7 05:52:19.467456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 05:52:19.473330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:52:19.480864 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 05:52:19.491206 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 05:52:19.505582 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 05:52:19.505649 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:52:19.516900 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 05:52:19.516951 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:52:19.527533 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 05:52:19.527595 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 05:52:19.544087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 05:52:19.544163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 05:52:19.555496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 05:52:19.566621 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 05:52:19.578120 systemd-networkd[902]: eth0: DHCPv6 lease lost Jul 7 05:52:19.585037 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 05:52:19.585311 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 05:52:19.600538 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 05:52:19.600670 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 05:52:19.614885 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 05:52:19.614946 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jul 7 05:52:19.647335 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 05:52:19.853895 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: Data path switched from VF: enP318s1 Jul 7 05:52:19.657535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 05:52:19.657629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:52:19.669561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:52:19.669637 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:52:19.682550 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 05:52:19.682615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:52:19.693601 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:52:19.693659 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:52:19.705514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:52:19.740543 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 05:52:19.741907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:52:19.754663 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:52:19.754744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:52:19.765539 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:52:19.765585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:52:19.777474 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:52:19.777539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:52:19.792498 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 7 05:52:19.792566 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:52:19.803921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:52:19.803983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:52:19.834407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:52:19.867943 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:52:19.868043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:52:19.875539 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 05:52:19.875607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:52:19.882967 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 05:52:19.883026 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:52:19.895086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:52:19.895146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:52:19.907472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:52:19.909663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:52:19.997462 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:52:19.997613 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:52:20.057686 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 05:52:20.057852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 05:52:20.069462 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:52:20.081075 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 7 05:52:20.081174 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 05:52:20.109338 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:52:20.248532 systemd[1]: Switching root. Jul 7 05:52:20.272898 systemd-journald[216]: Journal stopped Jul 7 05:52:09.386857 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 05:52:09.386881 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 05:52:09.386889 kernel: KASLR enabled Jul 7 05:52:09.386895 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 7 05:52:09.386902 kernel: printk: bootconsole [pl11] enabled Jul 7 05:52:09.386908 kernel: efi: EFI v2.7 by EDK II Jul 7 05:52:09.386915 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jul 7 05:52:09.386921 kernel: random: crng init done Jul 7 05:52:09.386927 kernel: ACPI: Early table checksum verification disabled Jul 7 05:52:09.386933 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 7 05:52:09.386939 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386945 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386953 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 7 05:52:09.386959 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386966 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386973 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386979 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 
00000001 MSFT 00000001) Jul 7 05:52:09.386987 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.386994 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.387000 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 7 05:52:09.387007 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 05:52:09.387013 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 7 05:52:09.387019 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 7 05:52:09.387026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 7 05:52:09.387032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 7 05:52:09.387039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 7 05:52:09.387045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 7 05:52:09.387051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 7 05:52:09.387059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 7 05:52:09.387066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 7 05:52:09.387072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 7 05:52:09.387078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 7 05:52:09.387085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 7 05:52:09.387091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 7 05:52:09.387097 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 7 05:52:09.387103 kernel: Zone ranges: Jul 7 05:52:09.387110 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 7 05:52:09.387116 kernel: DMA32 empty Jul 7 05:52:09.387122 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 05:52:09.387129 kernel: Movable zone start for each node 
Jul 7 05:52:09.387139 kernel: Early memory node ranges Jul 7 05:52:09.387146 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 7 05:52:09.387153 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 7 05:52:09.387160 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 7 05:52:09.387166 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 7 05:52:09.387174 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 7 05:52:09.387181 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 7 05:52:09.387188 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 05:52:09.387195 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 7 05:52:09.387202 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 7 05:52:09.387208 kernel: psci: probing for conduit method from ACPI. Jul 7 05:52:09.387215 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 05:52:09.387222 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 05:52:09.387228 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 7 05:52:09.387235 kernel: psci: SMC Calling Convention v1.4 Jul 7 05:52:09.387242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 05:52:09.387249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 7 05:52:09.387257 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 05:52:09.387264 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 05:52:09.387271 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 05:52:09.387278 kernel: Detected PIPT I-cache on CPU0 Jul 7 05:52:09.387285 kernel: CPU features: detected: GIC system register CPU interface Jul 7 05:52:09.387292 kernel: CPU features: detected: Hardware dirty bit management Jul 7 05:52:09.387299 kernel: CPU features: detected: Spectre-BHB Jul 7 05:52:09.387305 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 05:52:09.387312 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 05:52:09.387319 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 05:52:09.387326 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 7 05:52:09.387334 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 05:52:09.387341 kernel: alternatives: applying boot alternatives Jul 7 05:52:09.387349 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:52:09.387357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 7 05:52:09.387363 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 05:52:09.387370 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 05:52:09.389442 kernel: Fallback order for Node 0: 0 Jul 7 05:52:09.389454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 7 05:52:09.389461 kernel: Policy zone: Normal Jul 7 05:52:09.389468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 05:52:09.389476 kernel: software IO TLB: area num 2. Jul 7 05:52:09.389489 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 7 05:52:09.389497 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 7 05:52:09.389505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 05:52:09.389512 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 05:52:09.389519 kernel: rcu: RCU event tracing is enabled. Jul 7 05:52:09.389527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 05:52:09.389534 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 05:52:09.389541 kernel: Tracing variant of Tasks RCU enabled. Jul 7 05:52:09.389548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 05:52:09.389555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 05:52:09.389562 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 05:52:09.389570 kernel: GICv3: 960 SPIs implemented Jul 7 05:52:09.389577 kernel: GICv3: 0 Extended SPIs implemented Jul 7 05:52:09.389584 kernel: Root IRQ handler: gic_handle_irq Jul 7 05:52:09.389591 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 05:52:09.389598 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 7 05:52:09.389605 kernel: ITS: No ITS available, not enabling LPIs Jul 7 05:52:09.389612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 05:52:09.389619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:52:09.389626 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 05:52:09.389633 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 05:52:09.389640 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 05:52:09.389649 kernel: Console: colour dummy device 80x25 Jul 7 05:52:09.389656 kernel: printk: console [tty1] enabled Jul 7 05:52:09.389663 kernel: ACPI: Core revision 20230628 Jul 7 05:52:09.389670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 05:52:09.389678 kernel: pid_max: default: 32768 minimum: 301 Jul 7 05:52:09.389685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 05:52:09.389691 kernel: landlock: Up and running. Jul 7 05:52:09.389698 kernel: SELinux: Initializing. 
Jul 7 05:52:09.389705 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:52:09.389713 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:52:09.389722 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:52:09.389729 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:52:09.389737 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 7 05:52:09.389744 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 7 05:52:09.389751 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 7 05:52:09.389758 kernel: rcu: Hierarchical SRCU implementation. Jul 7 05:52:09.389765 kernel: rcu: Max phase no-delay instances is 400. Jul 7 05:52:09.389779 kernel: Remapping and enabling EFI services. Jul 7 05:52:09.389787 kernel: smp: Bringing up secondary CPUs ... Jul 7 05:52:09.389794 kernel: Detected PIPT I-cache on CPU1 Jul 7 05:52:09.389801 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 7 05:52:09.389811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 05:52:09.389818 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 05:52:09.389825 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 05:52:09.389833 kernel: SMP: Total of 2 processors activated. 
Jul 7 05:52:09.389840 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 05:52:09.389849 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 7 05:52:09.389857 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 05:52:09.389864 kernel: CPU features: detected: CRC32 instructions Jul 7 05:52:09.389872 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 05:52:09.389879 kernel: CPU features: detected: LSE atomic instructions Jul 7 05:52:09.389887 kernel: CPU features: detected: Privileged Access Never Jul 7 05:52:09.389894 kernel: CPU: All CPU(s) started at EL1 Jul 7 05:52:09.389901 kernel: alternatives: applying system-wide alternatives Jul 7 05:52:09.389909 kernel: devtmpfs: initialized Jul 7 05:52:09.389918 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 05:52:09.389926 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 05:52:09.389934 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 05:52:09.389941 kernel: SMBIOS 3.1.0 present. 
Jul 7 05:52:09.389949 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 7 05:52:09.389956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 05:52:09.389964 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 05:52:09.389971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 05:52:09.389979 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 05:52:09.389988 kernel: audit: initializing netlink subsys (disabled) Jul 7 05:52:09.389996 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 7 05:52:09.390003 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 05:52:09.390011 kernel: cpuidle: using governor menu Jul 7 05:52:09.390018 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 7 05:52:09.390026 kernel: ASID allocator initialised with 32768 entries Jul 7 05:52:09.390033 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 05:52:09.390041 kernel: Serial: AMBA PL011 UART driver Jul 7 05:52:09.390048 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 05:52:09.390057 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 05:52:09.390064 kernel: Modules: 509008 pages in range for PLT usage Jul 7 05:52:09.390072 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 05:52:09.390079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 05:52:09.390087 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 05:52:09.390095 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 05:52:09.390102 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 05:52:09.390110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 05:52:09.390117 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 7 05:52:09.390126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 05:52:09.390134 kernel: ACPI: Added _OSI(Module Device) Jul 7 05:52:09.390172 kernel: ACPI: Added _OSI(Processor Device) Jul 7 05:52:09.390223 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 05:52:09.390251 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 05:52:09.390275 kernel: ACPI: Interpreter enabled Jul 7 05:52:09.390283 kernel: ACPI: Using GIC for interrupt routing Jul 7 05:52:09.390291 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 7 05:52:09.390299 kernel: printk: console [ttyAMA0] enabled Jul 7 05:52:09.390331 kernel: printk: bootconsole [pl11] disabled Jul 7 05:52:09.390354 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 7 05:52:09.390362 kernel: iommu: Default domain type: Translated Jul 7 05:52:09.390370 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 05:52:09.390428 kernel: efivars: Registered efivars operations Jul 7 05:52:09.390437 kernel: vgaarb: loaded Jul 7 05:52:09.390444 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 05:52:09.390452 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 05:52:09.390475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 05:52:09.390503 kernel: pnp: PnP ACPI init Jul 7 05:52:09.390511 kernel: pnp: PnP ACPI: found 0 devices Jul 7 05:52:09.390519 kernel: NET: Registered PF_INET protocol family Jul 7 05:52:09.390526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 05:52:09.390549 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 05:52:09.390557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 05:52:09.390565 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 05:52:09.390572 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 05:52:09.390596 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 05:52:09.390606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:52:09.390613 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:52:09.390635 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 05:52:09.390643 kernel: PCI: CLS 0 bytes, default 64 Jul 7 05:52:09.390667 kernel: kvm [1]: HYP mode not available Jul 7 05:52:09.390675 kernel: Initialise system trusted keyrings Jul 7 05:52:09.390682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 05:52:09.390690 kernel: Key type asymmetric registered Jul 7 05:52:09.390709 kernel: Asymmetric key parser 'x509' registered Jul 7 05:52:09.390718 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 05:52:09.390739 kernel: io scheduler mq-deadline registered Jul 7 05:52:09.390746 kernel: io scheduler kyber registered Jul 7 05:52:09.390754 kernel: io scheduler bfq registered Jul 7 05:52:09.390761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 05:52:09.390791 kernel: thunder_xcv, ver 1.0 Jul 7 05:52:09.390815 kernel: thunder_bgx, ver 1.0 Jul 7 05:52:09.390823 kernel: nicpf, ver 1.0 Jul 7 05:52:09.390831 kernel: nicvf, ver 1.0 Jul 7 05:52:09.391124 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 05:52:09.391294 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:52:08 UTC (1751867528) Jul 7 05:52:09.391307 kernel: efifb: probing for efifb Jul 7 05:52:09.391315 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 7 05:52:09.391338 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 7 05:52:09.391346 kernel: efifb: scrolling: redraw Jul 7 05:52:09.391369 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 05:52:09.393450 kernel: Console: switching to colour 
frame buffer device 128x48 Jul 7 05:52:09.393474 kernel: fb0: EFI VGA frame buffer device Jul 7 05:52:09.393482 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 7 05:52:09.393489 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 05:52:09.393497 kernel: No ACPI PMU IRQ for CPU0 Jul 7 05:52:09.393505 kernel: No ACPI PMU IRQ for CPU1 Jul 7 05:52:09.393512 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 7 05:52:09.393520 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 05:52:09.393527 kernel: watchdog: Hard watchdog permanently disabled Jul 7 05:52:09.393535 kernel: NET: Registered PF_INET6 protocol family Jul 7 05:52:09.393545 kernel: Segment Routing with IPv6 Jul 7 05:52:09.393552 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 05:52:09.393560 kernel: NET: Registered PF_PACKET protocol family Jul 7 05:52:09.393568 kernel: Key type dns_resolver registered Jul 7 05:52:09.393576 kernel: registered taskstats version 1 Jul 7 05:52:09.393583 kernel: Loading compiled-in X.509 certificates Jul 7 05:52:09.393591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 05:52:09.393598 kernel: Key type .fscrypt registered Jul 7 05:52:09.393610 kernel: Key type fscrypt-provisioning registered Jul 7 05:52:09.393619 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 05:52:09.393627 kernel: ima: Allocated hash algorithm: sha1 Jul 7 05:52:09.393634 kernel: ima: No architecture policies found Jul 7 05:52:09.393642 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 05:52:09.393649 kernel: clk: Disabling unused clocks Jul 7 05:52:09.393657 kernel: Freeing unused kernel memory: 39424K Jul 7 05:52:09.393664 kernel: Run /init as init process Jul 7 05:52:09.393672 kernel: with arguments: Jul 7 05:52:09.393680 kernel: /init Jul 7 05:52:09.393689 kernel: with environment: Jul 7 05:52:09.393696 kernel: HOME=/ Jul 7 05:52:09.393703 kernel: TERM=linux Jul 7 05:52:09.393711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 05:52:09.393721 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:52:09.393731 systemd[1]: Detected virtualization microsoft. Jul 7 05:52:09.393739 systemd[1]: Detected architecture arm64. Jul 7 05:52:09.393747 systemd[1]: Running in initrd. Jul 7 05:52:09.393757 systemd[1]: No hostname configured, using default hostname. Jul 7 05:52:09.393765 systemd[1]: Hostname set to . Jul 7 05:52:09.393773 systemd[1]: Initializing machine ID from random generator. Jul 7 05:52:09.393781 systemd[1]: Queued start job for default target initrd.target. Jul 7 05:52:09.393789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:52:09.393797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:52:09.393806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 7 05:52:09.393814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:52:09.393824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 05:52:09.393832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 05:52:09.393842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 05:52:09.393850 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 05:52:09.393858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:52:09.393866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:52:09.393876 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:52:09.393884 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:52:09.393892 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:52:09.393900 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:52:09.393908 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:52:09.393916 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:52:09.393924 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:52:09.393933 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:52:09.393941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:52:09.393951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:52:09.393960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:52:09.393968 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:52:09.393976 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 7 05:52:09.393984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:52:09.393992 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 05:52:09.394001 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 05:52:09.394009 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:52:09.394053 systemd-journald[216]: Collecting audit messages is disabled. Jul 7 05:52:09.394078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:52:09.394086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:52:09.394095 systemd-journald[216]: Journal started Jul 7 05:52:09.394116 systemd-journald[216]: Runtime Journal (/run/log/journal/9c419361a05141728affc10e60987087) is 8.0M, max 78.5M, 70.5M free. Jul 7 05:52:09.400910 systemd-modules-load[217]: Inserted module 'overlay' Jul 7 05:52:09.440981 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:52:09.441025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 05:52:09.452311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 05:52:09.463595 kernel: Bridge firewalling registered Jul 7 05:52:09.458715 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 7 05:52:09.466955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:52:09.480829 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 05:52:09.493897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:52:09.506264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:52:09.534148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:52:09.552620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 05:52:09.575625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:52:09.593785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:52:09.616412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:09.627537 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:09.646615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:09.654095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:09.689590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:52:09.704594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:09.715643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:52:09.738198 dracut-cmdline[248]: dracut-dracut-053
Jul 7 05:52:09.738198 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:09.791868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:09.800708 systemd-resolved[252]: Positive Trust Anchors:
Jul 7 05:52:09.800719 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:52:09.800751 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:52:09.803956 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 7 05:52:09.806032 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:09.819119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:09.949396 kernel: SCSI subsystem initialized
Jul 7 05:52:09.956412 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:52:09.967407 kernel: iscsi: registered transport (tcp)
Jul 7 05:52:09.986687 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:52:09.986777 kernel: QLogic iSCSI HBA Driver
Jul 7 05:52:10.033019 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:10.047645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:52:10.083286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:52:10.083360 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:52:10.090401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:52:10.142409 kernel: raid6: neonx8 gen() 15739 MB/s
Jul 7 05:52:10.162389 kernel: raid6: neonx4 gen() 15421 MB/s
Jul 7 05:52:10.182422 kernel: raid6: neonx2 gen() 13051 MB/s
Jul 7 05:52:10.203404 kernel: raid6: neonx1 gen() 10284 MB/s
Jul 7 05:52:10.223416 kernel: raid6: int64x8 gen() 6812 MB/s
Jul 7 05:52:10.243406 kernel: raid6: int64x4 gen() 7196 MB/s
Jul 7 05:52:10.265390 kernel: raid6: int64x2 gen() 6026 MB/s
Jul 7 05:52:10.289851 kernel: raid6: int64x1 gen() 4986 MB/s
Jul 7 05:52:10.289873 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
Jul 7 05:52:10.316610 kernel: raid6: .... xor() 11715 MB/s, rmw enabled
Jul 7 05:52:10.316721 kernel: raid6: using neon recovery algorithm
Jul 7 05:52:10.327396 kernel: xor: measuring software checksum speed
Jul 7 05:52:10.335387 kernel: 8regs : 17991 MB/sec
Jul 7 05:52:10.335445 kernel: 32regs : 19529 MB/sec
Jul 7 05:52:10.340134 kernel: arm64_neon : 26874 MB/sec
Jul 7 05:52:10.346674 kernel: xor: using function: arm64_neon (26874 MB/sec)
Jul 7 05:52:10.401414 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:52:10.413746 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:10.433580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:10.459996 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Jul 7 05:52:10.467038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:10.504550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:52:10.523544 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Jul 7 05:52:10.557872 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:10.583726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:52:10.629385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:10.655170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:52:10.682083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:10.698311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:10.723639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:10.740799 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:52:10.774431 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 05:52:10.766871 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:52:10.814809 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 05:52:10.814837 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 7 05:52:10.814857 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 05:52:10.778394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:10.856540 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 05:52:10.856569 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 05:52:10.856580 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 05:52:10.856590 kernel: PTP clock support registered
Jul 7 05:52:10.778569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:10.887097 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 05:52:10.887125 kernel: hv_vmbus: registering driver hv_utils
Jul 7 05:52:10.887135 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 05:52:10.835881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:11.229664 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 05:52:11.229700 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 05:52:11.229710 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 7 05:52:11.229721 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 05:52:11.229733 kernel: scsi host1: storvsc_host_t
Jul 7 05:52:11.229954 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 05:52:11.230073 kernel: scsi host0: storvsc_host_t
Jul 7 05:52:10.871944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:11.253953 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 05:52:11.254010 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 7 05:52:10.872214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:10.899350 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.196037 systemd-resolved[252]: Clock change detected. Flushing caches.
Jul 7 05:52:11.247416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.271940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:11.291092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.310449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:11.310642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.320099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.377254 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: VF slot 1 added
Jul 7 05:52:11.377468 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 05:52:11.377401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:11.398290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 05:52:11.415112 kernel: hv_vmbus: registering driver hv_pci
Jul 7 05:52:11.415177 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 05:52:11.425846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:11.454658 kernel: hv_pci d3d7938b-013e-45f8-b634-b211d987b93c: PCI VMBus probing: Using version 0x10004
Jul 7 05:52:11.454849 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 05:52:11.454966 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 05:52:11.455432 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 05:52:11.455394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:11.562530 kernel: hv_pci d3d7938b-013e-45f8-b634-b211d987b93c: PCI host bridge to bus 013e:00
Jul 7 05:52:11.562721 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 05:52:11.562826 kernel: pci_bus 013e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 05:52:11.562930 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 05:52:11.563025 kernel: pci_bus 013e:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 05:52:11.563132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:11.563143 kernel: pci 013e:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 7 05:52:11.563167 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 05:52:11.563255 kernel: pci 013e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:52:11.563272 kernel: pci 013e:00:02.0: enabling Extended Tags
Jul 7 05:52:11.563286 kernel: pci 013e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 013e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 7 05:52:11.580093 kernel: pci_bus 013e:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 05:52:11.580352 kernel: pci 013e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 05:52:11.588465 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:11.644024 kernel: mlx5_core 013e:00:02.0: enabling device (0000 -> 0002)
Jul 7 05:52:11.652095 kernel: mlx5_core 013e:00:02.0: firmware version: 16.31.2424
Jul 7 05:52:11.952642 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: VF registering: eth1
Jul 7 05:52:11.952900 kernel: mlx5_core 013e:00:02.0 eth1: joined to eth0
Jul 7 05:52:11.964290 kernel: mlx5_core 013e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 05:52:11.976079 kernel: mlx5_core 013e:00:02.0 enP318s1: renamed from eth1
Jul 7 05:52:11.992144 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 05:52:12.083256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 05:52:12.118821 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (481)
Jul 7 05:52:12.118878 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497)
Jul 7 05:52:12.134765 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 05:52:12.141818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 05:52:12.155409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:52:12.182380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:52:12.207105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:13.225126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 05:52:13.225626 disk-uuid[605]: The operation has completed successfully.
Jul 7 05:52:13.288177 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:52:13.288281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:52:13.322260 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:52:13.348012 sh[718]: Success
Jul 7 05:52:13.381160 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:52:13.551887 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:52:13.584277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:52:13.591152 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:52:13.642574 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:52:13.642637 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:13.651610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:52:13.657683 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:52:13.662254 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:52:13.940402 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:52:13.947414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:52:13.973363 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:52:13.984482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:52:14.023121 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:14.023155 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:14.029371 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:14.071111 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:14.080574 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:52:14.096099 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:14.108089 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:52:14.126473 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:52:14.167646 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:14.185273 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:52:14.223756 systemd-networkd[902]: lo: Link UP
Jul 7 05:52:14.223770 systemd-networkd[902]: lo: Gained carrier
Jul 7 05:52:14.225466 systemd-networkd[902]: Enumeration completed
Jul 7 05:52:14.227649 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:52:14.228207 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:14.228210 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:52:14.234833 systemd[1]: Reached target network.target - Network.
Jul 7 05:52:14.335097 kernel: mlx5_core 013e:00:02.0 enP318s1: Link up
Jul 7 05:52:14.420086 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: Data path switched to VF: enP318s1
Jul 7 05:52:14.420934 systemd-networkd[902]: enP318s1: Link UP
Jul 7 05:52:14.421088 systemd-networkd[902]: eth0: Link UP
Jul 7 05:52:14.421239 systemd-networkd[902]: eth0: Gained carrier
Jul 7 05:52:14.421249 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:14.448766 systemd-networkd[902]: enP318s1: Gained carrier
Jul 7 05:52:14.465140 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:52:14.789806 ignition[868]: Ignition 2.19.0
Jul 7 05:52:14.793464 ignition[868]: Stage: fetch-offline
Jul 7 05:52:14.793527 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:14.795047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:14.793536 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:14.812488 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:52:14.793662 ignition[868]: parsed url from cmdline: ""
Jul 7 05:52:14.793665 ignition[868]: no config URL provided
Jul 7 05:52:14.793670 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.793677 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.793683 ignition[868]: failed to fetch config: resource requires networking
Jul 7 05:52:14.793890 ignition[868]: Ignition finished successfully
Jul 7 05:52:14.850353 ignition[911]: Ignition 2.19.0
Jul 7 05:52:14.850361 ignition[911]: Stage: fetch
Jul 7 05:52:14.850601 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:14.850611 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:14.850731 ignition[911]: parsed url from cmdline: ""
Jul 7 05:52:14.850738 ignition[911]: no config URL provided
Jul 7 05:52:14.850742 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.850750 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:14.850775 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 05:52:14.968625 ignition[911]: GET result: OK
Jul 7 05:52:14.968724 ignition[911]: config has been read from IMDS userdata
Jul 7 05:52:14.968767 ignition[911]: parsing config with SHA512: 7c3f39e9fe3a8d8e76d5ee4c07283baf81a91df59d8a97849773e1dd5ab6a7a1ff9947705ce3ee796461f93090ba4568462861caf90b72f28cb5a4b2c119c36e
Jul 7 05:52:14.973356 unknown[911]: fetched base config from "system"
Jul 7 05:52:14.973838 ignition[911]: fetch: fetch complete
Jul 7 05:52:14.973364 unknown[911]: fetched base config from "system"
Jul 7 05:52:14.973843 ignition[911]: fetch: fetch passed
Jul 7 05:52:14.973369 unknown[911]: fetched user config from "azure"
Jul 7 05:52:14.973891 ignition[911]: Ignition finished successfully
Jul 7 05:52:14.984607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:15.008268 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:52:15.032496 ignition[917]: Ignition 2.19.0
Jul 7 05:52:15.032509 ignition[917]: Stage: kargs
Jul 7 05:52:15.032708 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:15.040363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:15.032720 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:15.033729 ignition[917]: kargs: kargs passed
Jul 7 05:52:15.033787 ignition[917]: Ignition finished successfully
Jul 7 05:52:15.065393 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:52:15.089547 ignition[923]: Ignition 2.19.0
Jul 7 05:52:15.089559 ignition[923]: Stage: disks
Jul 7 05:52:15.096871 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:52:15.089775 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:15.089785 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:15.108656 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:15.090953 ignition[923]: disks: disks passed
Jul 7 05:52:15.120375 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:52:15.091036 ignition[923]: Ignition finished successfully
Jul 7 05:52:15.133573 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:52:15.144757 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:52:15.154227 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:52:15.187263 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:52:15.253444 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 7 05:52:15.262956 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:52:15.281273 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:52:15.338131 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:52:15.339172 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:52:15.344272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:52:15.388178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:15.398319 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:52:15.420104 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Jul 7 05:52:15.420151 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:15.434595 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:15.438825 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:15.445501 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 05:52:15.461189 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:15.454325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:52:15.454389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:15.469535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:15.485469 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:52:15.510392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:52:15.918080 coreos-metadata[945]: Jul 07 05:52:15.918 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 05:52:15.929426 coreos-metadata[945]: Jul 07 05:52:15.929 INFO Fetch successful
Jul 7 05:52:15.935203 coreos-metadata[945]: Jul 07 05:52:15.934 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 05:52:15.947163 coreos-metadata[945]: Jul 07 05:52:15.946 INFO Fetch successful
Jul 7 05:52:15.959272 coreos-metadata[945]: Jul 07 05:52:15.959 INFO wrote hostname ci-4081.3.4-a-5429f7cfbd to /sysroot/etc/hostname
Jul 7 05:52:15.968849 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:52:16.021259 systemd-networkd[902]: eth0: Gained IPv6LL
Jul 7 05:52:16.161905 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:52:16.185981 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:52:16.197233 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:52:16.205213 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:52:16.469264 systemd-networkd[902]: enP318s1: Gained IPv6LL
Jul 7 05:52:16.975917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:16.997306 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:52:17.010374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:17.040638 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:17.034338 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:52:17.069128 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:17.084586 ignition[1062]: INFO : Ignition 2.19.0
Jul 7 05:52:17.091009 ignition[1062]: INFO : Stage: mount
Jul 7 05:52:17.091009 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:17.091009 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:17.091009 ignition[1062]: INFO : mount: mount passed
Jul 7 05:52:17.091009 ignition[1062]: INFO : Ignition finished successfully
Jul 7 05:52:17.096155 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:52:17.128311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:52:17.159478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:17.196314 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074)
Jul 7 05:52:17.196377 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:17.203120 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:17.208008 kernel: BTRFS info (device sda6): using free space tree
Jul 7 05:52:17.216077 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 05:52:17.218333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:17.247323 ignition[1092]: INFO : Ignition 2.19.0
Jul 7 05:52:17.247323 ignition[1092]: INFO : Stage: files
Jul 7 05:52:17.256461 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:17.256461 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:17.256461 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:52:17.277994 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:52:17.277994 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:52:17.352679 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:52:17.361168 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:52:17.369310 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:52:17.361718 unknown[1092]: wrote ssh authorized keys file for user: core
Jul 7 05:52:17.388318 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:17.400769 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:17.468390 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 05:52:17.570901 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:17.570901 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:17.594945 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:52:18.370340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 05:52:18.607447 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:18.607447 ignition[1092]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 7 05:52:18.628806 ignition[1092]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:18.728888 ignition[1092]: INFO : files: files passed
Jul 7 05:52:18.728888 ignition[1092]: INFO : Ignition finished successfully
Jul 7 05:52:18.643670 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:52:18.697342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:52:18.711471 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:52:18.793425 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.793425 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.728997 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:52:18.829967 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:18.729132 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:52:18.790834 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:18.800840 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:52:18.845385 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:52:18.893038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:52:18.893194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:52:18.906334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:52:18.919616 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:52:18.931003 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:52:18.947348 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:52:18.971859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:18.988370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:52:19.010140 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:52:19.010252 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:52:19.024040 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:19.038251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:19.050872 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:52:19.068093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:52:19.068177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:19.085675 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:52:19.097602 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:52:19.108205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:52:19.118937 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:19.130897 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:19.143365 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:52:19.154863 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:19.167831 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:52:19.181584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:52:19.192798 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:52:19.203882 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:52:19.203981 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:19.219598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:19.225983 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:19.238276 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:52:19.243828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:19.251437 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:52:19.251523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:19.269150 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:52:19.269225 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:19.283955 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:52:19.284017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:52:19.294615 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 05:52:19.294667 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 05:52:19.327291 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:52:19.374616 ignition[1144]: INFO : Ignition 2.19.0
Jul 7 05:52:19.374616 ignition[1144]: INFO : Stage: umount
Jul 7 05:52:19.374616 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:19.374616 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 05:52:19.374616 ignition[1144]: INFO : umount: umount passed
Jul 7 05:52:19.374616 ignition[1144]: INFO : Ignition finished successfully
Jul 7 05:52:19.354200 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:19.366183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:52:19.366274 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:19.379381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:52:19.379450 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:19.397957 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:52:19.398582 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:52:19.398705 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:52:19.412586 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:52:19.412701 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:52:19.419692 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:52:19.419764 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:19.428831 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:52:19.428892 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:19.439422 systemd[1]: Stopped target network.target - Network.
Jul 7 05:52:19.444315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:52:19.444397 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:19.457101 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:52:19.467456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:52:19.473330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:19.480864 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:52:19.491206 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:52:19.505582 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:52:19.505649 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:52:19.516900 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:52:19.516951 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:52:19.527533 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:52:19.527595 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:52:19.544087 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:52:19.544163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:52:19.555496 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:52:19.566621 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:19.578120 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jul 7 05:52:19.585037 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 05:52:19.585311 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 05:52:19.600538 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 05:52:19.600670 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:19.614885 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 05:52:19.614946 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:19.647335 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 05:52:19.853895 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: Data path switched from VF: enP318s1
Jul 7 05:52:19.657535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 05:52:19.657629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:19.669561 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:52:19.669637 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:19.682550 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 05:52:19.682615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:19.693601 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 05:52:19.693659 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:19.705514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:19.740543 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 05:52:19.741907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:19.754663 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 05:52:19.754744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:19.765539 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 05:52:19.765585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:19.777474 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 05:52:19.777539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:19.792498 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 05:52:19.792566 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:19.803921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:19.803983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:19.834407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 05:52:19.867943 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 05:52:19.868043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:19.875539 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 05:52:19.875607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:19.882967 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 05:52:19.883026 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:19.895086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:19.895146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:19.907472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 05:52:19.909663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 05:52:19.997462 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 05:52:19.997613 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 05:52:20.057686 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 05:52:20.057852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:20.069462 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 05:52:20.081075 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 05:52:20.081174 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:20.109338 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 05:52:20.248532 systemd[1]: Switching root.
Jul 7 05:52:20.272898 systemd-journald[216]: Journal stopped
Jul 7 05:52:24.224156 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jul 7 05:52:24.224209 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 05:52:24.224221 kernel: SELinux: policy capability open_perms=1
Jul 7 05:52:24.224235 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 05:52:24.224243 kernel: SELinux: policy capability always_check_network=0
Jul 7 05:52:24.224251 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 05:52:24.224260 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 05:52:24.224269 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 05:52:24.224277 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 05:52:24.224286 kernel: audit: type=1403 audit(1751867541.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 05:52:24.224298 systemd[1]: Successfully loaded SELinux policy in 153.441ms.
Jul 7 05:52:24.224309 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.890ms.
Jul 7 05:52:24.224320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:52:24.224329 systemd[1]: Detected virtualization microsoft.
Jul 7 05:52:24.224339 systemd[1]: Detected architecture arm64.
Jul 7 05:52:24.224350 systemd[1]: Detected first boot.
Jul 7 05:52:24.224359 systemd[1]: Hostname set to .
Jul 7 05:52:24.224368 systemd[1]: Initializing machine ID from random generator.
Jul 7 05:52:24.224377 zram_generator::config[1203]: No configuration found.
Jul 7 05:52:24.224390 systemd[1]: Populated /etc with preset unit settings.
Jul 7 05:52:24.224399 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 05:52:24.224411 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 05:52:24.224420 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 05:52:24.224430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 05:52:24.224440 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 05:52:24.224449 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 05:52:24.224459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 05:52:24.224468 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 05:52:24.224480 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 05:52:24.224489 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 05:52:24.224505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:24.224515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:24.224527 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 05:52:24.224537 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 05:52:24.224546 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 05:52:24.224556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:52:24.224566 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 7 05:52:24.224577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:24.224587 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 05:52:24.224600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:24.224614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:52:24.224625 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:52:24.224641 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:52:24.224653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 05:52:24.224667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 05:52:24.224679 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:52:24.224691 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:52:24.224703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:24.224716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:24.224728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:24.224740 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 05:52:24.224756 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 05:52:24.224768 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 05:52:24.224780 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 05:52:24.224792 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 05:52:24.224804 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 05:52:24.224815 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 05:52:24.224827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 05:52:24.224837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:52:24.224850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:52:24.224862 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 05:52:24.224875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:52:24.224887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:52:24.224899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:52:24.224912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 05:52:24.224924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:52:24.224940 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 05:52:24.224951 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 7 05:52:24.224961 kernel: fuse: init (API version 7.39)
Jul 7 05:52:24.224970 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 7 05:52:24.224980 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:52:24.224990 kernel: loop: module loaded
Jul 7 05:52:24.224999 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:52:24.225009 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 05:52:24.225021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 05:52:24.225087 systemd-journald[1315]: Collecting audit messages is disabled.
Jul 7 05:52:24.225111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:52:24.225120 kernel: ACPI: bus type drm_connector registered
Jul 7 05:52:24.225132 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 05:52:24.225143 systemd-journald[1315]: Journal started
Jul 7 05:52:24.225164 systemd-journald[1315]: Runtime Journal (/run/log/journal/82cf6d5071e94544a5f1ac5c4d0f204f) is 8.0M, max 78.5M, 70.5M free.
Jul 7 05:52:24.248801 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:52:24.252367 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 05:52:24.259512 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 05:52:24.265270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 05:52:24.272158 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 05:52:24.279560 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 05:52:24.286237 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 05:52:24.294245 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:24.304530 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 05:52:24.304751 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 05:52:24.313576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:52:24.313750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:52:24.321806 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:52:24.321992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:52:24.328852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:52:24.329050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:52:24.337302 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 05:52:24.337468 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 05:52:24.344684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:52:24.344898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:52:24.353211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:24.360892 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 05:52:24.369499 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 05:52:24.377611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:24.395817 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 05:52:24.408158 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 05:52:24.418237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 05:52:24.425464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 05:52:24.441328 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 05:52:24.450224 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 05:52:24.458097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:52:24.459436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 05:52:24.470530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:52:24.473370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:52:24.482243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:52:24.504401 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 05:52:24.514010 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 05:52:24.516580 systemd-journald[1315]: Time spent on flushing to /var/log/journal/82cf6d5071e94544a5f1ac5c4d0f204f is 15.809ms for 887 entries.
Jul 7 05:52:24.516580 systemd-journald[1315]: System Journal (/var/log/journal/82cf6d5071e94544a5f1ac5c4d0f204f) is 8.0M, max 2.6G, 2.6G free.
Jul 7 05:52:24.602547 systemd-journald[1315]: Received client request to flush runtime journal.
Jul 7 05:52:24.534354 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 05:52:24.543318 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 05:52:24.554740 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 05:52:24.565975 udevadm[1363]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 05:52:24.588513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:24.604321 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 05:52:24.668921 systemd-tmpfiles[1361]: ACLs are not supported, ignoring.
Jul 7 05:52:24.668941 systemd-tmpfiles[1361]: ACLs are not supported, ignoring.
Jul 7 05:52:24.674563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:24.691234 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 05:52:24.778145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 05:52:24.794427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:52:24.812474 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Jul 7 05:52:24.812495 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Jul 7 05:52:24.817561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:25.627688 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 05:52:25.639244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:25.678391 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Jul 7 05:52:25.980859 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:26.004494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:52:26.062507 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 05:52:26.073423 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 7 05:52:26.155488 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 05:52:26.189370 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 05:52:26.259614 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 05:52:26.259730 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 05:52:26.268425 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 7 05:52:26.289179 systemd-networkd[1397]: lo: Link UP
Jul 7 05:52:26.289550 systemd-networkd[1397]: lo: Gained carrier
Jul 7 05:52:26.292970 systemd-networkd[1397]: Enumeration completed
Jul 7 05:52:26.293379 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:52:26.293991 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:26.293997 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:52:26.304582 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 05:52:26.304675 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 05:52:26.317031 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 05:52:26.334943 kernel: Console: switching to colour dummy device 80x25
Jul 7 05:52:26.324854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 05:52:26.344137 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 05:52:26.362519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:26.417100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1401)
Jul 7 05:52:26.431247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:26.431509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:26.458084 kernel: mlx5_core 013e:00:02.0 enP318s1: Link up
Jul 7 05:52:26.472728 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 05:52:26.504432 kernel: hv_netvsc 002248b8-2498-0022-48b8-2498002248b8 eth0: Data path switched to VF: enP318s1
Jul 7 05:52:26.504804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:26.516861 systemd-networkd[1397]: enP318s1: Link UP
Jul 7 05:52:26.517356 systemd-networkd[1397]: eth0: Link UP
Jul 7 05:52:26.518042 systemd-networkd[1397]: eth0: Gained carrier
Jul 7 05:52:26.518301 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:26.522437 systemd-networkd[1397]: enP318s1: Gained carrier
Jul 7 05:52:26.530125 systemd-networkd[1397]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 05:52:26.564515 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 05:52:26.580234 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 05:52:26.701092 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:52:26.728600 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 05:52:26.737505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:26.756285 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 05:52:26.761832 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:52:26.788629 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 05:52:26.797015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:52:26.806629 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 05:52:26.806672 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:52:26.813223 systemd[1]: Reached target machines.target - Containers.
Jul 7 05:52:26.820048 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 05:52:26.840276 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 05:52:26.848401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 05:52:26.859220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:52:26.862256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 05:52:26.875490 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 05:52:26.887700 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 05:52:26.895875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:26.908100 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 05:52:26.929820 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 05:52:26.974977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 05:52:26.980890 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 05:52:26.982083 kernel: loop0: detected capacity change from 0 to 31320
Jul 7 05:52:27.311096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 05:52:27.443095 kernel: loop1: detected capacity change from 0 to 114328
Jul 7 05:52:27.541242 systemd-networkd[1397]: enP318s1: Gained IPv6LL
Jul 7 05:52:27.752097 kernel: loop2: detected capacity change from 0 to 203944
Jul 7 05:52:27.795091 kernel: loop3: detected capacity change from 0 to 114432
Jul 7 05:52:28.053265 systemd-networkd[1397]: eth0: Gained IPv6LL
Jul 7 05:52:28.056095 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 05:52:28.124096 kernel: loop4: detected capacity change from 0 to 31320 Jul 7 05:52:28.134104 kernel: loop5: detected capacity change from 0 to 114328 Jul 7 05:52:28.145112 kernel: loop6: detected capacity change from 0 to 203944 Jul 7 05:52:28.155094 kernel: loop7: detected capacity change from 0 to 114432 Jul 7 05:52:28.159606 (sd-merge)[1507]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 7 05:52:28.160140 (sd-merge)[1507]: Merged extensions into '/usr'. Jul 7 05:52:28.164043 systemd[1]: Reloading requested from client PID 1491 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 05:52:28.164236 systemd[1]: Reloading... Jul 7 05:52:28.235351 zram_generator::config[1534]: No configuration found. Jul 7 05:52:28.397836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:28.470469 systemd[1]: Reloading finished in 305 ms. Jul 7 05:52:28.481544 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 05:52:28.497230 systemd[1]: Starting ensure-sysext.service... Jul 7 05:52:28.503422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:52:28.513253 systemd[1]: Reloading requested from client PID 1595 ('systemctl') (unit ensure-sysext.service)... Jul 7 05:52:28.513410 systemd[1]: Reloading... Jul 7 05:52:28.528393 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 05:52:28.528706 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 05:52:28.531594 systemd-tmpfiles[1596]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 7 05:52:28.532028 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 7 05:52:28.532246 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 7 05:52:28.548231 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:52:28.548243 systemd-tmpfiles[1596]: Skipping /boot Jul 7 05:52:28.560197 systemd-tmpfiles[1596]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:52:28.560354 systemd-tmpfiles[1596]: Skipping /boot Jul 7 05:52:28.612204 zram_generator::config[1626]: No configuration found. Jul 7 05:52:28.756784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:28.832527 systemd[1]: Reloading finished in 318 ms. Jul 7 05:52:28.848943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:52:28.872978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:52:28.897318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 05:52:28.914356 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 05:52:28.931312 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:52:28.947322 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 05:52:28.960047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:28.963479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:28.980515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:29.007468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 05:52:29.015998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:29.025506 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 05:52:29.037834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:29.038032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:29.045999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:29.046225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:29.055833 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:29.056194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:29.079845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:29.087595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:29.097340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:29.107743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:29.115447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:52:29.120002 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 05:52:29.135980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:29.136236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:29.137742 systemd-resolved[1694]: Positive Trust Anchors: Jul 7 05:52:29.138198 systemd-resolved[1694]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:52:29.138238 systemd-resolved[1694]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:52:29.145397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:29.145573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:29.154067 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:29.154314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:29.162858 augenrules[1728]: No rules Jul 7 05:52:29.168349 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:52:29.179553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:52:29.179878 systemd-resolved[1694]: Using system hostname 'ci-4081.3.4-a-5429f7cfbd'. Jul 7 05:52:29.185646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:52:29.194403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:52:29.207341 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:52:29.219400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:52:29.227872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 05:52:29.228728 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 05:52:29.238879 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:52:29.250843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:52:29.251019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:52:29.260331 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:52:29.260512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:52:29.268356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:52:29.268590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:52:29.277173 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:52:29.277407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:52:29.290164 systemd[1]: Finished ensure-sysext.service. Jul 7 05:52:29.297890 systemd[1]: Reached target network.target - Network. Jul 7 05:52:29.303808 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:52:29.312600 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:52:29.320240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:52:29.320330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:52:29.955438 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 05:52:29.964544 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 7 05:52:31.082117 ldconfig[1485]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 05:52:31.095343 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 05:52:31.106315 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 05:52:31.124218 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 05:52:31.131296 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:52:31.137421 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:52:31.144856 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 05:52:31.152628 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:52:31.161743 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:52:31.169324 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:52:31.176906 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:52:31.176949 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:52:31.182545 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:52:31.189793 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:52:31.198856 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:52:31.205913 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:52:31.214629 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:52:31.221622 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:52:31.227793 systemd[1]: Reached target basic.target - Basic System. 
Jul 7 05:52:31.234498 systemd[1]: System is tainted: cgroupsv1 Jul 7 05:52:31.234583 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:52:31.234615 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:52:31.247158 systemd[1]: Starting chronyd.service - NTP client/server... Jul 7 05:52:31.256252 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:52:31.270335 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 05:52:31.287291 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:52:31.294189 (chronyd)[1765]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 7 05:52:31.295898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:52:31.305903 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:52:31.312272 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:52:31.312321 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 7 05:52:31.315308 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 7 05:52:31.323302 chronyd[1776]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 7 05:52:31.326752 jq[1772]: false Jul 7 05:52:31.328001 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 7 05:52:31.335358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 05:52:31.337623 KVP[1774]: KVP starting; pid is:1774 Jul 7 05:52:31.351393 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:52:31.359940 KVP[1774]: KVP LIC Version: 3.1 Jul 7 05:52:31.360104 kernel: hv_utils: KVP IC version 4.0 Jul 7 05:52:31.365668 chronyd[1776]: Timezone right/UTC failed leap second check, ignoring Jul 7 05:52:31.366144 chronyd[1776]: Loaded seccomp filter (level 2) Jul 7 05:52:31.369496 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:52:31.376319 extend-filesystems[1773]: Found loop4 Jul 7 05:52:31.376319 extend-filesystems[1773]: Found loop5 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found loop6 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found loop7 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda1 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda2 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda3 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found usr Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda4 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda6 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda7 Jul 7 05:52:31.390789 extend-filesystems[1773]: Found sda9 Jul 7 05:52:31.390789 extend-filesystems[1773]: Checking size of /dev/sda9 Jul 7 05:52:31.387367 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 7 05:52:31.610545 extend-filesystems[1773]: Old size kept for /dev/sda9 Jul 7 05:52:31.610545 extend-filesystems[1773]: Found sr0 Jul 7 05:52:31.467916 dbus-daemon[1769]: [system] SELinux support is enabled Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.603 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.617 INFO Fetch successful Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.617 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.629 INFO Fetch successful Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.629 INFO Fetching http://168.63.129.16/machine/e0866c12-ad4d-4d6c-8584-e4c25cfc31e7/c08044db%2D6a37%2D417d%2D8540%2Dea928e8bf1f7.%5Fci%2D4081.3.4%2Da%2D5429f7cfbd?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.635 INFO Fetch successful Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.635 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 7 05:52:31.675737 coreos-metadata[1767]: Jul 07 05:52:31.654 INFO Fetch successful Jul 7 05:52:31.402951 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 05:52:31.428434 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:52:31.462361 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:52:31.487475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:52:31.498408 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 05:52:31.520198 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 7 05:52:31.676845 update_engine[1810]: I20250707 05:52:31.631451 1810 main.cc:92] Flatcar Update Engine starting Jul 7 05:52:31.676845 update_engine[1810]: I20250707 05:52:31.636008 1810 update_check_scheduler.cc:74] Next update check in 7m30s Jul 7 05:52:31.548826 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:52:31.678324 jq[1813]: true Jul 7 05:52:31.564395 systemd[1]: Started chronyd.service - NTP client/server. Jul 7 05:52:31.591527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:52:31.591792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:52:31.592103 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:52:31.592327 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:52:31.626591 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:52:31.626845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:52:31.627771 systemd-logind[1801]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 05:52:31.634321 systemd-logind[1801]: New seat seat0. Jul 7 05:52:31.646478 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:52:31.657029 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:52:31.689527 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:52:31.689794 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:52:31.770802 (ntainerd)[1834]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:52:31.786391 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 7 05:52:31.786424 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:52:31.795725 dbus-daemon[1769]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 05:52:31.802321 jq[1833]: true Jul 7 05:52:31.803187 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:52:31.803212 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:52:31.832370 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 05:52:31.908784 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1821) Jul 7 05:52:31.914532 systemd[1]: Started update-engine.service - Update Engine. Jul 7 05:52:31.923068 tar[1830]: linux-arm64/helm Jul 7 05:52:31.928785 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:52:31.930367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:52:31.946485 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:52:32.069458 bash[1894]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:52:32.071883 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:52:32.080713 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 7 05:52:32.158465 locksmithd[1879]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:52:32.438274 containerd[1834]: time="2025-07-07T05:52:32.438142560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:52:32.505563 containerd[1834]: time="2025-07-07T05:52:32.503784120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.509741 containerd[1834]: time="2025-07-07T05:52:32.509689720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:32.510182 containerd[1834]: time="2025-07-07T05:52:32.510161600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:52:32.510271 containerd[1834]: time="2025-07-07T05:52:32.510257840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:52:32.510486 containerd[1834]: time="2025-07-07T05:52:32.510469480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:52:32.510989 containerd[1834]: time="2025-07-07T05:52:32.510970120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.511238 containerd[1834]: time="2025-07-07T05:52:32.511214720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:32.511390 containerd[1834]: time="2025-07-07T05:52:32.511373160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.511693200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.511715360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.511729920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.511739600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.511815400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.512027920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.512211640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.512228240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.512321560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 05:52:32.513153 containerd[1834]: time="2025-07-07T05:52:32.512365680Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530105720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530188320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530205640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530222000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530240320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530467280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:52:32.530848 containerd[1834]: time="2025-07-07T05:52:32.530817200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.530938240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.530954040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.530972240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.530992480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531006640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531021560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531036040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531050800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531096120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531111680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531125360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531149160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531169200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531308 containerd[1834]: time="2025-07-07T05:52:32.531183240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531198080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531211920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531226240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531238240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531252320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531265480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531299880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531314200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531334360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531348800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531370440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531395600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531407880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.531596 containerd[1834]: time="2025-07-07T05:52:32.531419480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531475360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531495000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531510400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531524240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531534640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531548520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531559480Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:52:32.531855 containerd[1834]: time="2025-07-07T05:52:32.531570680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 05:52:32.532007 containerd[1834]: time="2025-07-07T05:52:32.531876240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:52:32.532007 containerd[1834]: time="2025-07-07T05:52:32.531947680Z" level=info msg="Connect containerd service" Jul 7 05:52:32.532007 containerd[1834]: time="2025-07-07T05:52:32.531990320Z" level=info msg="using legacy CRI server" Jul 7 05:52:32.532007 containerd[1834]: time="2025-07-07T05:52:32.531996760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:52:32.536447 containerd[1834]: time="2025-07-07T05:52:32.534201960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:52:32.537385 containerd[1834]: time="2025-07-07T05:52:32.537339400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Jul 7 05:52:32.537853 containerd[1834]: time="2025-07-07T05:52:32.537826520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:52:32.538002 containerd[1834]: time="2025-07-07T05:52:32.537882880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:52:32.538002 containerd[1834]: time="2025-07-07T05:52:32.537929680Z" level=info msg="Start subscribing containerd event" Jul 7 05:52:32.538002 containerd[1834]: time="2025-07-07T05:52:32.537971360Z" level=info msg="Start recovering state" Jul 7 05:52:32.541301 containerd[1834]: time="2025-07-07T05:52:32.538051280Z" level=info msg="Start event monitor" Jul 7 05:52:32.541301 containerd[1834]: time="2025-07-07T05:52:32.541102400Z" level=info msg="Start snapshots syncer" Jul 7 05:52:32.541301 containerd[1834]: time="2025-07-07T05:52:32.541126800Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:52:32.541301 containerd[1834]: time="2025-07-07T05:52:32.541135080Z" level=info msg="Start streaming server" Jul 7 05:52:32.541301 containerd[1834]: time="2025-07-07T05:52:32.541245800Z" level=info msg="containerd successfully booted in 0.107043s" Jul 7 05:52:32.541590 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:52:32.738161 tar[1830]: linux-arm64/LICENSE Jul 7 05:52:32.738161 tar[1830]: linux-arm64/README.md Jul 7 05:52:32.758330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 05:52:32.764638 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:52:32.793244 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:52:32.808406 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:52:32.819614 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 7 05:52:32.827196 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 7 05:52:32.827754 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:52:32.848643 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:52:32.863374 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 05:52:32.874554 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:52:32.897409 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:52:32.905029 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 05:52:32.912018 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:52:32.918553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:52:32.926689 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:52:32.927543 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:52:32.938296 systemd[1]: Startup finished in 13.216s (kernel) + 11.411s (userspace) = 24.628s. Jul 7 05:52:33.163429 login[1947]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:52:33.170379 login[1949]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:52:33.177849 systemd-logind[1801]: New session 1 of user core. Jul 7 05:52:33.178790 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:52:33.185960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:52:33.191776 systemd-logind[1801]: New session 2 of user core. Jul 7 05:52:33.205965 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:52:33.217557 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 7 05:52:33.224796 (systemd)[1965]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:52:33.369250 systemd[1965]: Queued start job for default target default.target. Jul 7 05:52:33.369704 systemd[1965]: Created slice app.slice - User Application Slice. Jul 7 05:52:33.369724 systemd[1965]: Reached target paths.target - Paths. Jul 7 05:52:33.369735 systemd[1965]: Reached target timers.target - Timers. Jul 7 05:52:33.377233 systemd[1965]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:52:33.388274 systemd[1965]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:52:33.388360 systemd[1965]: Reached target sockets.target - Sockets. Jul 7 05:52:33.388374 systemd[1965]: Reached target basic.target - Basic System. Jul 7 05:52:33.388428 systemd[1965]: Reached target default.target - Main User Target. Jul 7 05:52:33.388456 systemd[1965]: Startup finished in 155ms. Jul 7 05:52:33.389271 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:52:33.394129 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:52:33.396098 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 05:52:33.488433 kubelet[1950]: E0707 05:52:33.488274 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:52:33.491919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:52:33.494476 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 05:52:34.397104 waagent[1939]: 2025-07-07T05:52:34.393393Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 7 05:52:34.399435 waagent[1939]: 2025-07-07T05:52:34.399340Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 7 05:52:34.404047 waagent[1939]: 2025-07-07T05:52:34.403973Z INFO Daemon Daemon Python: 3.11.9 Jul 7 05:52:34.410106 waagent[1939]: 2025-07-07T05:52:34.409177Z INFO Daemon Daemon Run daemon Jul 7 05:52:34.413716 waagent[1939]: 2025-07-07T05:52:34.413639Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 7 05:52:34.422955 waagent[1939]: 2025-07-07T05:52:34.422768Z INFO Daemon Daemon Using waagent for provisioning Jul 7 05:52:34.428460 waagent[1939]: 2025-07-07T05:52:34.428392Z INFO Daemon Daemon Activate resource disk Jul 7 05:52:34.433476 waagent[1939]: 2025-07-07T05:52:34.433394Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 05:52:34.445970 waagent[1939]: 2025-07-07T05:52:34.445848Z INFO Daemon Daemon Found device: None Jul 7 05:52:34.451459 waagent[1939]: 2025-07-07T05:52:34.451346Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 05:52:34.461239 waagent[1939]: 2025-07-07T05:52:34.461147Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 05:52:34.475249 waagent[1939]: 2025-07-07T05:52:34.475157Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 05:52:34.481373 waagent[1939]: 2025-07-07T05:52:34.481293Z INFO Daemon Daemon Running default provisioning handler Jul 7 05:52:34.494409 waagent[1939]: 2025-07-07T05:52:34.494306Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 7 05:52:34.508761 waagent[1939]: 2025-07-07T05:52:34.508666Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 05:52:34.518297 waagent[1939]: 2025-07-07T05:52:34.518223Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 05:52:34.523632 waagent[1939]: 2025-07-07T05:52:34.523564Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 05:52:34.594421 waagent[1939]: 2025-07-07T05:52:34.594306Z INFO Daemon Daemon Successfully mounted dvd Jul 7 05:52:34.610045 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 05:52:34.613200 waagent[1939]: 2025-07-07T05:52:34.612198Z INFO Daemon Daemon Detect protocol endpoint Jul 7 05:52:34.617370 waagent[1939]: 2025-07-07T05:52:34.617283Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 05:52:34.623397 waagent[1939]: 2025-07-07T05:52:34.623297Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 7 05:52:34.629923 waagent[1939]: 2025-07-07T05:52:34.629846Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 05:52:34.636236 waagent[1939]: 2025-07-07T05:52:34.636019Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 05:52:34.641328 waagent[1939]: 2025-07-07T05:52:34.641250Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 05:52:34.673314 waagent[1939]: 2025-07-07T05:52:34.673191Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 05:52:34.680199 waagent[1939]: 2025-07-07T05:52:34.680156Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 05:52:34.685578 waagent[1939]: 2025-07-07T05:52:34.685494Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 05:52:34.825026 waagent[1939]: 2025-07-07T05:52:34.824899Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 05:52:34.832156 waagent[1939]: 2025-07-07T05:52:34.832045Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 7 05:52:34.842531 waagent[1939]: 2025-07-07T05:52:34.842453Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 05:52:34.867209 waagent[1939]: 2025-07-07T05:52:34.867157Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 05:52:34.873943 waagent[1939]: 2025-07-07T05:52:34.873881Z INFO Daemon Jul 7 05:52:34.876845 waagent[1939]: 2025-07-07T05:52:34.876781Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6668fbed-1200-419c-b6e7-08fb1e83c8f9 eTag: 15577605781095939394 source: Fabric] Jul 7 05:52:34.888690 waagent[1939]: 2025-07-07T05:52:34.888630Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 7 05:52:34.895765 waagent[1939]: 2025-07-07T05:52:34.895702Z INFO Daemon Jul 7 05:52:34.898689 waagent[1939]: 2025-07-07T05:52:34.898625Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 05:52:34.913679 waagent[1939]: 2025-07-07T05:52:34.913607Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 05:52:35.009174 waagent[1939]: 2025-07-07T05:52:35.008912Z INFO Daemon Downloaded certificate {'thumbprint': 'D134CE1FFF0393A5C38D7339BD028FF28575585A', 'hasPrivateKey': False} Jul 7 05:52:35.021278 waagent[1939]: 2025-07-07T05:52:35.021211Z INFO Daemon Downloaded certificate {'thumbprint': '0E4B05DC0CC000CB5B3592EEC709A042F6A3FFC7', 'hasPrivateKey': True} Jul 7 05:52:35.031403 waagent[1939]: 2025-07-07T05:52:35.031340Z INFO Daemon Fetch goal state completed Jul 7 05:52:35.044644 waagent[1939]: 2025-07-07T05:52:35.044593Z INFO Daemon Daemon Starting provisioning Jul 7 05:52:35.049746 waagent[1939]: 2025-07-07T05:52:35.049661Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 7 05:52:35.054786 waagent[1939]: 2025-07-07T05:52:35.054720Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-5429f7cfbd] Jul 7 05:52:35.085087 waagent[1939]: 2025-07-07T05:52:35.080183Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-5429f7cfbd] Jul 7 05:52:35.086822 waagent[1939]: 2025-07-07T05:52:35.086740Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 05:52:35.094150 waagent[1939]: 2025-07-07T05:52:35.094053Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 05:52:35.122515 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:52:35.122524 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:52:35.122557 systemd-networkd[1397]: eth0: DHCP lease lost Jul 7 05:52:35.124979 waagent[1939]: 2025-07-07T05:52:35.124875Z INFO Daemon Daemon Create user account if not exists Jul 7 05:52:35.130740 waagent[1939]: 2025-07-07T05:52:35.130660Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 05:52:35.136451 waagent[1939]: 2025-07-07T05:52:35.136375Z INFO Daemon Daemon Configure sudoer Jul 7 05:52:35.138199 systemd-networkd[1397]: eth0: DHCPv6 lease lost Jul 7 05:52:35.141325 waagent[1939]: 2025-07-07T05:52:35.141244Z INFO Daemon Daemon Configure sshd Jul 7 05:52:35.145815 waagent[1939]: 2025-07-07T05:52:35.145745Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 05:52:35.159052 waagent[1939]: 2025-07-07T05:52:35.158970Z INFO Daemon Daemon Deploy ssh public key. 
Jul 7 05:52:35.170132 systemd-networkd[1397]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 05:52:36.294088 waagent[1939]: 2025-07-07T05:52:36.290088Z INFO Daemon Daemon Provisioning complete Jul 7 05:52:36.316006 waagent[1939]: 2025-07-07T05:52:36.315944Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 05:52:36.323512 waagent[1939]: 2025-07-07T05:52:36.323432Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 7 05:52:36.333635 waagent[1939]: 2025-07-07T05:52:36.333564Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 7 05:52:36.494237 waagent[2027]: 2025-07-07T05:52:36.494125Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 7 05:52:36.494642 waagent[2027]: 2025-07-07T05:52:36.494323Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 7 05:52:36.494642 waagent[2027]: 2025-07-07T05:52:36.494385Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 7 05:52:36.541909 waagent[2027]: 2025-07-07T05:52:36.541783Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 7 05:52:36.542167 waagent[2027]: 2025-07-07T05:52:36.542111Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:52:36.542261 waagent[2027]: 2025-07-07T05:52:36.542213Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:52:36.553327 waagent[2027]: 2025-07-07T05:52:36.553145Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 05:52:36.560417 waagent[2027]: 2025-07-07T05:52:36.560353Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 05:52:36.561031 waagent[2027]: 2025-07-07T05:52:36.560978Z INFO ExtHandler Jul 7 05:52:36.561137 waagent[2027]: 2025-07-07T05:52:36.561097Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c750adab-ab8b-433f-846c-ba5be7121642 eTag: 15577605781095939394 source: Fabric] Jul 7 05:52:36.561499 waagent[2027]: 2025-07-07T05:52:36.561451Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 7 05:52:36.562171 waagent[2027]: 2025-07-07T05:52:36.562119Z INFO ExtHandler Jul 7 05:52:36.562253 waagent[2027]: 2025-07-07T05:52:36.562220Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 05:52:36.566856 waagent[2027]: 2025-07-07T05:52:36.566801Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 05:52:36.656191 waagent[2027]: 2025-07-07T05:52:36.656038Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D134CE1FFF0393A5C38D7339BD028FF28575585A', 'hasPrivateKey': False} Jul 7 05:52:36.656689 waagent[2027]: 2025-07-07T05:52:36.656637Z INFO ExtHandler Downloaded certificate {'thumbprint': '0E4B05DC0CC000CB5B3592EEC709A042F6A3FFC7', 'hasPrivateKey': True} Jul 7 05:52:36.657219 waagent[2027]: 2025-07-07T05:52:36.657160Z INFO ExtHandler Fetch goal state completed Jul 7 05:52:36.674360 waagent[2027]: 2025-07-07T05:52:36.674268Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2027 Jul 7 05:52:36.674550 waagent[2027]: 2025-07-07T05:52:36.674507Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 05:52:36.676553 waagent[2027]: 2025-07-07T05:52:36.676486Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 05:52:36.677013 waagent[2027]: 2025-07-07T05:52:36.676967Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 05:52:36.694346 waagent[2027]: 2025-07-07T05:52:36.694291Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 05:52:36.694601 waagent[2027]: 
2025-07-07T05:52:36.694554Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 05:52:36.701486 waagent[2027]: 2025-07-07T05:52:36.701417Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 05:52:36.709727 systemd[1]: Reloading requested from client PID 2042 ('systemctl') (unit waagent.service)... Jul 7 05:52:36.709750 systemd[1]: Reloading... Jul 7 05:52:36.796100 zram_generator::config[2076]: No configuration found. Jul 7 05:52:36.936343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:37.019833 systemd[1]: Reloading finished in 309 ms. Jul 7 05:52:37.043212 waagent[2027]: 2025-07-07T05:52:37.043101Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 7 05:52:37.049720 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit waagent.service)... Jul 7 05:52:37.049743 systemd[1]: Reloading... Jul 7 05:52:37.143099 zram_generator::config[2165]: No configuration found. Jul 7 05:52:37.267739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:52:37.343206 systemd[1]: Reloading finished in 293 ms. 
Jul 7 05:52:37.364086 waagent[2027]: 2025-07-07T05:52:37.363197Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 05:52:37.364086 waagent[2027]: 2025-07-07T05:52:37.363418Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 05:52:37.612823 waagent[2027]: 2025-07-07T05:52:37.612659Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 05:52:37.617449 waagent[2027]: 2025-07-07T05:52:37.616622Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 7 05:52:37.617762 waagent[2027]: 2025-07-07T05:52:37.617688Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 05:52:37.618030 waagent[2027]: 2025-07-07T05:52:37.617968Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:52:37.618554 waagent[2027]: 2025-07-07T05:52:37.618497Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 05:52:37.618711 waagent[2027]: 2025-07-07T05:52:37.618621Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:52:37.619206 waagent[2027]: 2025-07-07T05:52:37.619141Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 05:52:37.619655 waagent[2027]: 2025-07-07T05:52:37.619583Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 7 05:52:37.619898 waagent[2027]: 2025-07-07T05:52:37.619844Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 05:52:37.619898 waagent[2027]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 05:52:37.619898 waagent[2027]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 05:52:37.619898 waagent[2027]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 05:52:37.619898 waagent[2027]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:52:37.619898 waagent[2027]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:52:37.619898 waagent[2027]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 05:52:37.620250 waagent[2027]: 2025-07-07T05:52:37.620168Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 7 05:52:37.620447 waagent[2027]: 2025-07-07T05:52:37.620347Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 05:52:37.621441 waagent[2027]: 2025-07-07T05:52:37.621181Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 05:52:37.621441 waagent[2027]: 2025-07-07T05:52:37.621330Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 05:52:37.621563 waagent[2027]: 2025-07-07T05:52:37.621502Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 05:52:37.622156 waagent[2027]: 2025-07-07T05:52:37.622037Z INFO EnvHandler ExtHandler Configure routes Jul 7 05:52:37.622258 waagent[2027]: 2025-07-07T05:52:37.622166Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 7 05:52:37.622569 waagent[2027]: 2025-07-07T05:52:37.622507Z INFO EnvHandler ExtHandler Gateway:None Jul 7 05:52:37.623352 waagent[2027]: 2025-07-07T05:52:37.623289Z INFO EnvHandler ExtHandler Routes:None Jul 7 05:52:37.632416 waagent[2027]: 2025-07-07T05:52:37.632348Z INFO ExtHandler ExtHandler Jul 7 05:52:37.632532 waagent[2027]: 2025-07-07T05:52:37.632491Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ee39c4d7-b7c6-4895-84c8-98a40b5f2e63 correlation 85f90dd8-3cd5-45e7-942d-bf546d504540 created: 2025-07-07T05:51:24.362556Z] Jul 7 05:52:37.633272 waagent[2027]: 2025-07-07T05:52:37.633198Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 7 05:52:37.633962 waagent[2027]: 2025-07-07T05:52:37.633901Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 7 05:52:37.675337 waagent[2027]: 2025-07-07T05:52:37.675224Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4F708240-89E9-4D75-AAD1-22C36C858ED4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 7 05:52:37.676927 waagent[2027]: 2025-07-07T05:52:37.676826Z INFO MonitorHandler ExtHandler Network interfaces: Jul 7 05:52:37.676927 waagent[2027]: Executing ['ip', '-a', '-o', 'link']: Jul 7 05:52:37.676927 waagent[2027]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 7 05:52:37.676927 waagent[2027]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:24:98 brd ff:ff:ff:ff:ff:ff Jul 7 05:52:37.676927 waagent[2027]: 3: enP318s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:24:98 brd ff:ff:ff:ff:ff:ff\ altname enP318p0s2 Jul 7 05:52:37.676927 waagent[2027]: Executing ['ip', '-4', '-a', '-o', 
'address']: Jul 7 05:52:37.676927 waagent[2027]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 7 05:52:37.676927 waagent[2027]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 7 05:52:37.676927 waagent[2027]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 7 05:52:37.676927 waagent[2027]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 7 05:52:37.676927 waagent[2027]: 2: eth0 inet6 fe80::222:48ff:feb8:2498/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 05:52:37.676927 waagent[2027]: 3: enP318s1 inet6 fe80::222:48ff:feb8:2498/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 05:52:37.730202 waagent[2027]: 2025-07-07T05:52:37.729953Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 7 05:52:37.730202 waagent[2027]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.730202 waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.730202 waagent[2027]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.730202 waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.730202 waagent[2027]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.730202 waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.730202 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 05:52:37.730202 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 05:52:37.730202 waagent[2027]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 05:52:37.733867 waagent[2027]: 2025-07-07T05:52:37.733768Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 7 05:52:37.733867 waagent[2027]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.733867 
waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.733867 waagent[2027]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.733867 waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.733867 waagent[2027]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 05:52:37.733867 waagent[2027]: pkts bytes target prot opt in out source destination Jul 7 05:52:37.733867 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 05:52:37.733867 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 05:52:37.733867 waagent[2027]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 05:52:37.734207 waagent[2027]: 2025-07-07T05:52:37.734161Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 7 05:52:41.131894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:52:41.138329 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:34522.service - OpenSSH per-connection server daemon (10.200.16.10:34522). Jul 7 05:52:41.662684 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 34522 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:52:41.664247 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:52:41.668734 systemd-logind[1801]: New session 3 of user core. Jul 7 05:52:41.679360 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:52:42.100359 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:34524.service - OpenSSH per-connection server daemon (10.200.16.10:34524). Jul 7 05:52:42.587187 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 34524 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:52:42.589249 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:52:42.596880 systemd-logind[1801]: New session 4 of user core. 
Jul 7 05:52:42.599563 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:52:42.943129 sshd[2264]: pam_unix(sshd:session): session closed for user core Jul 7 05:52:42.947250 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:34524.service: Deactivated successfully. Jul 7 05:52:42.950473 systemd-logind[1801]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:52:42.951157 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:52:42.952468 systemd-logind[1801]: Removed session 4. Jul 7 05:52:43.023311 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:34540.service - OpenSSH per-connection server daemon (10.200.16.10:34540). Jul 7 05:52:43.468467 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 34540 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:52:43.469918 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:52:43.474782 systemd-logind[1801]: New session 5 of user core. Jul 7 05:52:43.485437 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 05:52:43.705681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 05:52:43.714279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:52:43.797234 sshd[2272]: pam_unix(sshd:session): session closed for user core Jul 7 05:52:43.800951 systemd-logind[1801]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:52:43.804849 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:34540.service: Deactivated successfully. Jul 7 05:52:43.809879 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:52:43.813997 systemd-logind[1801]: Removed session 5. Jul 7 05:52:43.840311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 05:52:43.844881 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:52:43.883367 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:34550.service - OpenSSH per-connection server daemon (10.200.16.10:34550).
Jul 7 05:52:43.941765 kubelet[2292]: E0707 05:52:43.941691 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:52:43.944644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:52:43.944806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:52:44.354163 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 34550 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:52:44.355931 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:52:44.361184 systemd-logind[1801]: New session 6 of user core.
Jul 7 05:52:44.367377 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 05:52:44.711332 sshd[2298]: pam_unix(sshd:session): session closed for user core
Jul 7 05:52:44.716003 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:34550.service: Deactivated successfully.
Jul 7 05:52:44.718800 systemd-logind[1801]: Session 6 logged out. Waiting for processes to exit.
Jul 7 05:52:44.719217 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 05:52:44.721006 systemd-logind[1801]: Removed session 6.
Jul 7 05:52:44.796367 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:34554.service - OpenSSH per-connection server daemon (10.200.16.10:34554).
Jul 7 05:52:45.279375 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 34554 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:52:45.280893 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:52:45.285168 systemd-logind[1801]: New session 7 of user core.
Jul 7 05:52:45.291410 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 05:52:45.674757 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 05:52:45.675095 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:52:45.704922 sudo[2312]: pam_unix(sudo:session): session closed for user root
Jul 7 05:52:45.785397 sshd[2308]: pam_unix(sshd:session): session closed for user core
Jul 7 05:52:45.789145 systemd-logind[1801]: Session 7 logged out. Waiting for processes to exit.
Jul 7 05:52:45.789364 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:34554.service: Deactivated successfully.
Jul 7 05:52:45.793031 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 05:52:45.795342 systemd-logind[1801]: Removed session 7.
Jul 7 05:52:45.871336 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:34570.service - OpenSSH per-connection server daemon (10.200.16.10:34570).
Jul 7 05:52:46.356720 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 34570 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:52:46.358357 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:52:46.363181 systemd-logind[1801]: New session 8 of user core.
Jul 7 05:52:46.370356 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 05:52:46.632908 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 05:52:46.633250 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:52:46.637208 sudo[2322]: pam_unix(sudo:session): session closed for user root
Jul 7 05:52:46.642707 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 05:52:46.643047 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:52:46.655318 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 05:52:46.659836 auditctl[2325]: No rules
Jul 7 05:52:46.660254 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 05:52:46.660516 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 05:52:46.667450 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:52:46.692363 augenrules[2344]: No rules
Jul 7 05:52:46.694627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:52:46.696508 sudo[2321]: pam_unix(sudo:session): session closed for user root
Jul 7 05:52:46.778199 sshd[2317]: pam_unix(sshd:session): session closed for user core
Jul 7 05:52:46.781972 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:34570.service: Deactivated successfully.
Jul 7 05:52:46.786534 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 05:52:46.787722 systemd-logind[1801]: Session 8 logged out. Waiting for processes to exit.
Jul 7 05:52:46.788795 systemd-logind[1801]: Removed session 8.
Jul 7 05:52:46.881335 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:34572.service - OpenSSH per-connection server daemon (10.200.16.10:34572).
Jul 7 05:52:47.365654 sshd[2353]: Accepted publickey for core from 10.200.16.10 port 34572 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:52:47.367149 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:52:47.371797 systemd-logind[1801]: New session 9 of user core.
Jul 7 05:52:47.382440 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 05:52:47.641757 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 05:52:47.642045 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:52:48.542337 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 05:52:48.542606 (dockerd)[2372]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 05:52:49.027767 dockerd[2372]: time="2025-07-07T05:52:49.027697960Z" level=info msg="Starting up"
Jul 7 05:52:49.597950 dockerd[2372]: time="2025-07-07T05:52:49.597897240Z" level=info msg="Loading containers: start."
Jul 7 05:52:49.770101 kernel: Initializing XFRM netlink socket
Jul 7 05:52:49.913035 systemd-networkd[1397]: docker0: Link UP
Jul 7 05:52:49.945452 dockerd[2372]: time="2025-07-07T05:52:49.945397000Z" level=info msg="Loading containers: done."
Jul 7 05:52:49.966448 dockerd[2372]: time="2025-07-07T05:52:49.966392080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 05:52:49.966679 dockerd[2372]: time="2025-07-07T05:52:49.966515640Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 7 05:52:49.966679 dockerd[2372]: time="2025-07-07T05:52:49.966635160Z" level=info msg="Daemon has completed initialization"
Jul 7 05:52:50.041101 dockerd[2372]: time="2025-07-07T05:52:50.041019320Z" level=info msg="API listen on /run/docker.sock"
Jul 7 05:52:50.041825 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 05:52:51.119950 containerd[1834]: time="2025-07-07T05:52:51.119893920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 05:52:52.168556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841417394.mount: Deactivated successfully.
Jul 7 05:52:53.548285 containerd[1834]: time="2025-07-07T05:52:53.548215560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:53.553471 containerd[1834]: time="2025-07-07T05:52:53.553213000Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jul 7 05:52:53.559793 containerd[1834]: time="2025-07-07T05:52:53.559732920Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:53.566662 containerd[1834]: time="2025-07-07T05:52:53.566588040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:53.567766 containerd[1834]: time="2025-07-07T05:52:53.567715520Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.44777352s"
Jul 7 05:52:53.567766 containerd[1834]: time="2025-07-07T05:52:53.567764480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 7 05:52:53.569168 containerd[1834]: time="2025-07-07T05:52:53.569128880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 05:52:53.955737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 05:52:53.962262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:52:54.079854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:52:54.083380 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:52:54.121360 kubelet[2576]: E0707 05:52:54.121266 2576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:52:54.123416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:52:54.123559 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:52:55.175105 chronyd[1776]: Selected source PHC0
Jul 7 05:52:55.192279 containerd[1834]: time="2025-07-07T05:52:55.192206826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:55.195398 containerd[1834]: time="2025-07-07T05:52:55.195338155Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jul 7 05:52:55.200118 containerd[1834]: time="2025-07-07T05:52:55.200039027Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:55.207852 containerd[1834]: time="2025-07-07T05:52:55.207784789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:55.209043 containerd[1834]: time="2025-07-07T05:52:55.208887057Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.639714098s"
Jul 7 05:52:55.209043 containerd[1834]: time="2025-07-07T05:52:55.208932937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 7 05:52:55.209694 containerd[1834]: time="2025-07-07T05:52:55.209395972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 05:52:56.363181 containerd[1834]: time="2025-07-07T05:52:56.363117419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:56.366050 containerd[1834]: time="2025-07-07T05:52:56.365993739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jul 7 05:52:56.375552 containerd[1834]: time="2025-07-07T05:52:56.375499661Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:56.382884 containerd[1834]: time="2025-07-07T05:52:56.382776382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:56.384072 containerd[1834]: time="2025-07-07T05:52:56.383910222Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.17448377s"
Jul 7 05:52:56.384072 containerd[1834]: time="2025-07-07T05:52:56.383953902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 7 05:52:56.384773 containerd[1834]: time="2025-07-07T05:52:56.384571462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 05:52:57.523136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820283944.mount: Deactivated successfully.
Jul 7 05:52:57.908187 containerd[1834]: time="2025-07-07T05:52:57.907714752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:57.913083 containerd[1834]: time="2025-07-07T05:52:57.912991553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jul 7 05:52:57.916239 containerd[1834]: time="2025-07-07T05:52:57.916205474Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:57.923218 containerd[1834]: time="2025-07-07T05:52:57.923143155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:57.924051 containerd[1834]: time="2025-07-07T05:52:57.923652035Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.539047813s"
Jul 7 05:52:57.924051 containerd[1834]: time="2025-07-07T05:52:57.923686515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 7 05:52:57.924210 containerd[1834]: time="2025-07-07T05:52:57.924179035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 05:52:58.603988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395124462.mount: Deactivated successfully.
Jul 7 05:52:59.941919 containerd[1834]: time="2025-07-07T05:52:59.941842284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:59.944888 containerd[1834]: time="2025-07-07T05:52:59.944585405Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 7 05:52:59.949838 containerd[1834]: time="2025-07-07T05:52:59.949777526Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:59.959276 containerd[1834]: time="2025-07-07T05:52:59.959180287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:52:59.964100 containerd[1834]: time="2025-07-07T05:52:59.962869968Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.038654413s"
Jul 7 05:52:59.964100 containerd[1834]: time="2025-07-07T05:52:59.962965248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 7 05:52:59.966466 containerd[1834]: time="2025-07-07T05:52:59.966427128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 05:53:00.596738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118614886.mount: Deactivated successfully.
Jul 7 05:53:00.634887 containerd[1834]: time="2025-07-07T05:53:00.634816677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:00.638145 containerd[1834]: time="2025-07-07T05:53:00.637956998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 7 05:53:00.642536 containerd[1834]: time="2025-07-07T05:53:00.642473519Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:00.650616 containerd[1834]: time="2025-07-07T05:53:00.650569080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:00.651559 containerd[1834]: time="2025-07-07T05:53:00.651401280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 684.929272ms"
Jul 7 05:53:00.651559 containerd[1834]: time="2025-07-07T05:53:00.651445040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 7 05:53:00.652127 containerd[1834]: time="2025-07-07T05:53:00.652094000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 05:53:01.383991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335497545.mount: Deactivated successfully.
Jul 7 05:53:04.205721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 05:53:04.212301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:53:04.350289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:53:04.351743 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:53:04.445481 kubelet[2696]: E0707 05:53:04.445417 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:53:04.448838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:53:04.449016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
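The repeated kubelet start failures above (restart counters 1 through 3) all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written when the node is initialized or joined with kubeadm, so the failures are expected on a freshly provisioned node. A minimal sketch of a check for this state (path taken from the log; the wording of the output is my own):

```shell
# Report whether the kubelet config that the log complains about exists yet.
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then
  echo "kubelet config present: $cfg"
else
  echo "kubelet config missing: $cfg (node not yet initialized/joined)"
fi
```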
Jul 7 05:53:05.619275 containerd[1834]: time="2025-07-07T05:53:05.619209163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:05.623868 containerd[1834]: time="2025-07-07T05:53:05.623792563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 7 05:53:05.629951 containerd[1834]: time="2025-07-07T05:53:05.629886284Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:05.638018 containerd[1834]: time="2025-07-07T05:53:05.637947525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:53:05.639608 containerd[1834]: time="2025-07-07T05:53:05.639453045Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.987319925s"
Jul 7 05:53:05.639608 containerd[1834]: time="2025-07-07T05:53:05.639502085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 7 05:53:12.114360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:53:12.123339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:53:12.166190 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)...
Jul 7 05:53:12.166205 systemd[1]: Reloading...
Jul 7 05:53:12.284363 zram_generator::config[2798]: No configuration found.
Jul 7 05:53:12.416026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:53:12.494848 systemd[1]: Reloading finished in 328 ms.
Jul 7 05:53:12.557028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:53:12.558881 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 05:53:12.559201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:53:12.566899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:53:12.690304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:53:12.695495 (kubelet)[2880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 05:53:12.738819 kubelet[2880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:53:12.738819 kubelet[2880]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 05:53:12.738819 kubelet[2880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:53:12.739255 kubelet[2880]: I0707 05:53:12.738890 2880 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 05:53:13.470143 kubelet[2880]: I0707 05:53:13.469998 2880 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 05:53:13.470143 kubelet[2880]: I0707 05:53:13.470043 2880 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 05:53:13.471092 kubelet[2880]: I0707 05:53:13.470646 2880 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 05:53:13.491036 kubelet[2880]: E0707 05:53:13.490970 2880 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:53:13.492371 kubelet[2880]: I0707 05:53:13.492334 2880 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 05:53:13.499187 kubelet[2880]: E0707 05:53:13.499148 2880 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 05:53:13.499415 kubelet[2880]: I0707 05:53:13.499403 2880 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 05:53:13.504002 kubelet[2880]: I0707 05:53:13.503958 2880 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 05:53:13.505528 kubelet[2880]: I0707 05:53:13.505424 2880 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 05:53:13.506395 kubelet[2880]: I0707 05:53:13.505769 2880 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 05:53:13.506395 kubelet[2880]: I0707 05:53:13.505803 2880 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-5429f7cfbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 7 05:53:13.506395 kubelet[2880]: I0707 05:53:13.506036 2880 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 05:53:13.506395 kubelet[2880]: I0707 05:53:13.506046 2880 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 05:53:13.506593 kubelet[2880]: I0707 05:53:13.506210 2880 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:53:13.509456 kubelet[2880]: I0707 05:53:13.509423 2880 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 05:53:13.510029 kubelet[2880]: I0707 05:53:13.510015 2880 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 05:53:13.510171 kubelet[2880]: I0707 05:53:13.510142 2880 kubelet.go:314] "Adding apiserver pod source"
Jul 7 05:53:13.510212 kubelet[2880]: I0707 05:53:13.510177 2880 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 05:53:13.515130 kubelet[2880]: I0707 05:53:13.514802 2880 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 05:53:13.515580 kubelet[2880]: I0707 05:53:13.515560 2880 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 05:53:13.515695 kubelet[2880]: W0707 05:53:13.515684 2880 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 05:53:13.516413 kubelet[2880]: I0707 05:53:13.516392 2880 server.go:1274] "Started kubelet"
Jul 7 05:53:13.516849 kubelet[2880]: W0707 05:53:13.516662 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-5429f7cfbd&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Jul 7 05:53:13.516849 kubelet[2880]: E0707 05:53:13.516723 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-5429f7cfbd&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:53:13.520540 kubelet[2880]: W0707 05:53:13.520474 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Jul 7 05:53:13.520540 kubelet[2880]: E0707 05:53:13.520544 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:53:13.520737 kubelet[2880]: I0707 05:53:13.520691 2880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 05:53:13.521194 kubelet[2880]: I0707 05:53:13.521081 2880 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 05:53:13.521550 kubelet[2880]: I0707 05:53:13.521504 2880 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 05:53:13.524111 kubelet[2880]: I0707 05:53:13.523177 2880 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 05:53:13.525135 kubelet[2880]: I0707 05:53:13.525105 2880 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 05:53:13.526696 kubelet[2880]: E0707 05:53:13.525491 2880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-5429f7cfbd.184fe24b650fb394 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-5429f7cfbd,UID:ci-4081.3.4-a-5429f7cfbd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-5429f7cfbd,},FirstTimestamp:2025-07-07 05:53:13.516364692 +0000 UTC m=+0.816938618,LastTimestamp:2025-07-07 05:53:13.516364692 +0000 UTC m=+0.816938618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-5429f7cfbd,}"
Jul 7 05:53:13.528097 kubelet[2880]: I0707 05:53:13.527752 2880 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 05:53:13.531799 kubelet[2880]: I0707 05:53:13.531741 2880 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 05:53:13.532147 kubelet[2880]: E0707 05:53:13.532118 2880 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-5429f7cfbd\" not found"
Jul 7 05:53:13.532552 kubelet[2880]: I0707 05:53:13.532521 2880 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 05:53:13.532645 kubelet[2880]: I0707 05:53:13.532605 2880 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 05:53:13.533468 kubelet[2880]: E0707 05:53:13.533420 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-5429f7cfbd?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms"
Jul 7 05:53:13.535102 kubelet[2880]: I0707 05:53:13.534142 2880 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 05:53:13.535832 kubelet[2880]: W0707 05:53:13.535775 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Jul 7 05:53:13.535991 kubelet[2880]: E0707 05:53:13.535965 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:53:13.537114 kubelet[2880]: E0707 05:53:13.537052 2880 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 05:53:13.537526 kubelet[2880]: I0707 05:53:13.537505 2880 factory.go:221] Registration of the containerd container factory successfully
Jul 7 05:53:13.537594 kubelet[2880]: I0707 05:53:13.537586 2880 factory.go:221] Registration of the systemd container factory successfully
Jul 7 05:53:13.552977 kubelet[2880]: I0707 05:53:13.552888 2880 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv4" Jul 7 05:53:13.554239 kubelet[2880]: I0707 05:53:13.554182 2880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 05:53:13.554239 kubelet[2880]: I0707 05:53:13.554230 2880 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:13.554362 kubelet[2880]: I0707 05:53:13.554253 2880 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:13.554362 kubelet[2880]: E0707 05:53:13.554305 2880 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:13.562712 kubelet[2880]: W0707 05:53:13.562646 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Jul 7 05:53:13.562903 kubelet[2880]: E0707 05:53:13.562739 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:13.626553 kubelet[2880]: I0707 05:53:13.626477 2880 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:13.626553 kubelet[2880]: I0707 05:53:13.626495 2880 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:13.626553 kubelet[2880]: I0707 05:53:13.626522 2880 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:13.633118 kubelet[2880]: E0707 05:53:13.633050 2880 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-5429f7cfbd\" not found" Jul 7 05:53:13.633429 kubelet[2880]: I0707 05:53:13.633401 2880 policy_none.go:49] "None policy: Start" Jul 7 
05:53:13.634687 kubelet[2880]: I0707 05:53:13.634300 2880 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:13.634687 kubelet[2880]: I0707 05:53:13.634358 2880 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:13.642757 kubelet[2880]: I0707 05:53:13.642722 2880 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:13.644083 kubelet[2880]: I0707 05:53:13.643107 2880 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:13.644083 kubelet[2880]: I0707 05:53:13.643124 2880 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:13.644563 kubelet[2880]: I0707 05:53:13.644530 2880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:13.648001 kubelet[2880]: E0707 05:53:13.647947 2880 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-5429f7cfbd\" not found" Jul 7 05:53:13.733641 kubelet[2880]: I0707 05:53:13.733371 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.733641 kubelet[2880]: I0707 05:53:13.733428 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.733641 kubelet[2880]: I0707 05:53:13.733456 2880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.733641 kubelet[2880]: I0707 05:53:13.733482 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.733641 kubelet[2880]: I0707 05:53:13.733528 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f9cb2ee93e3f6a78a75e409789706e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-5429f7cfbd\" (UID: \"d3f9cb2ee93e3f6a78a75e409789706e\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.734290 kubelet[2880]: I0707 05:53:13.734090 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.734290 kubelet[2880]: E0707 05:53:13.734100 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-5429f7cfbd?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" 
interval="400ms" Jul 7 05:53:13.734290 kubelet[2880]: I0707 05:53:13.734135 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.734290 kubelet[2880]: I0707 05:53:13.734157 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.734290 kubelet[2880]: I0707 05:53:13.734189 2880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.745068 kubelet[2880]: I0707 05:53:13.745014 2880 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.745687 kubelet[2880]: E0707 05:53:13.745524 2880 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.948240 kubelet[2880]: I0707 05:53:13.948152 2880 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.948673 kubelet[2880]: E0707 05:53:13.948631 2880 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:13.964584 containerd[1834]: time="2025-07-07T05:53:13.964498733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-5429f7cfbd,Uid:38ad66146f22f588ff92c70873b0b285,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:13.967725 containerd[1834]: time="2025-07-07T05:53:13.967659892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-5429f7cfbd,Uid:97a037a48748624e1dbceadeb27d5f2f,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:13.969698 containerd[1834]: time="2025-07-07T05:53:13.969641972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-5429f7cfbd,Uid:d3f9cb2ee93e3f6a78a75e409789706e,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:14.135296 kubelet[2880]: E0707 05:53:14.135144 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-5429f7cfbd?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms" Jul 7 05:53:14.351045 kubelet[2880]: I0707 05:53:14.350998 2880 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:14.351758 kubelet[2880]: E0707 05:53:14.351722 2880 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:14.373557 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jul 7 05:53:14.684552 kubelet[2880]: W0707 05:53:14.684471 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Jul 7 05:53:14.684552 kubelet[2880]: E0707 05:53:14.684557 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:14.714488 kubelet[2880]: W0707 05:53:14.714378 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Jul 7 05:53:14.714488 kubelet[2880]: E0707 05:53:14.714455 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:14.784128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991664038.mount: Deactivated successfully. 
Jul 7 05:53:14.839302 containerd[1834]: time="2025-07-07T05:53:14.839212099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:14.850342 containerd[1834]: time="2025-07-07T05:53:14.850209577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:14.856105 containerd[1834]: time="2025-07-07T05:53:14.855505816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:14.859247 containerd[1834]: time="2025-07-07T05:53:14.859179775Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:14.868869 containerd[1834]: time="2025-07-07T05:53:14.867883854Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:14.871297 containerd[1834]: time="2025-07-07T05:53:14.871147493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:14.879392 containerd[1834]: time="2025-07-07T05:53:14.879348572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 05:53:14.897980 containerd[1834]: time="2025-07-07T05:53:14.897915448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:14.899189 
containerd[1834]: time="2025-07-07T05:53:14.898860448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 931.121796ms" Jul 7 05:53:14.912720 containerd[1834]: time="2025-07-07T05:53:14.912653046Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 948.065393ms" Jul 7 05:53:14.914711 containerd[1834]: time="2025-07-07T05:53:14.914531405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 944.812633ms" Jul 7 05:53:14.915125 kubelet[2880]: W0707 05:53:14.914949 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-5429f7cfbd&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Jul 7 05:53:14.917257 kubelet[2880]: E0707 05:53:14.915456 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-5429f7cfbd&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:14.936384 kubelet[2880]: 
E0707 05:53:14.936260 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-5429f7cfbd?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="1.6s" Jul 7 05:53:15.103830 kubelet[2880]: W0707 05:53:15.103748 2880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Jul 7 05:53:15.103830 kubelet[2880]: E0707 05:53:15.103824 2880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:15.154472 kubelet[2880]: I0707 05:53:15.154442 2880 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:15.154819 kubelet[2880]: E0707 05:53:15.154790 2880 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:15.362283 containerd[1834]: time="2025-07-07T05:53:15.361418767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:15.362283 containerd[1834]: time="2025-07-07T05:53:15.361605607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:15.362283 containerd[1834]: time="2025-07-07T05:53:15.361622647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.364462 containerd[1834]: time="2025-07-07T05:53:15.364047126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.370188 containerd[1834]: time="2025-07-07T05:53:15.369632405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:15.370188 containerd[1834]: time="2025-07-07T05:53:15.369694325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:15.370188 containerd[1834]: time="2025-07-07T05:53:15.369714085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.370188 containerd[1834]: time="2025-07-07T05:53:15.369824245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.372392 containerd[1834]: time="2025-07-07T05:53:15.372230325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:15.372392 containerd[1834]: time="2025-07-07T05:53:15.372337645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:15.372392 containerd[1834]: time="2025-07-07T05:53:15.372350045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.372659 containerd[1834]: time="2025-07-07T05:53:15.372482645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:15.450796 containerd[1834]: time="2025-07-07T05:53:15.450659191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-5429f7cfbd,Uid:38ad66146f22f588ff92c70873b0b285,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0eb89ebced5479fb1427ebab7ebba0225b918e035c3142ebdc3a7d1dc33c697\"" Jul 7 05:53:15.457443 containerd[1834]: time="2025-07-07T05:53:15.457219910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-5429f7cfbd,Uid:d3f9cb2ee93e3f6a78a75e409789706e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cb5e03f89c8f086f7fc2837a930740ad582dd5899ddfc7dd4b16e41e4dbb10f\"" Jul 7 05:53:15.458181 containerd[1834]: time="2025-07-07T05:53:15.457579190Z" level=info msg="CreateContainer within sandbox \"f0eb89ebced5479fb1427ebab7ebba0225b918e035c3142ebdc3a7d1dc33c697\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:53:15.459452 containerd[1834]: time="2025-07-07T05:53:15.459053349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-5429f7cfbd,Uid:97a037a48748624e1dbceadeb27d5f2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5defcf0119fc49a48e416bc2bc0b37d5f96326a231bc598332220f7061b39fb9\"" Jul 7 05:53:15.461316 containerd[1834]: time="2025-07-07T05:53:15.461273069Z" level=info msg="CreateContainer within sandbox \"1cb5e03f89c8f086f7fc2837a930740ad582dd5899ddfc7dd4b16e41e4dbb10f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:53:15.465031 containerd[1834]: time="2025-07-07T05:53:15.464886188Z" level=info msg="CreateContainer within sandbox \"5defcf0119fc49a48e416bc2bc0b37d5f96326a231bc598332220f7061b39fb9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:53:15.520352 kubelet[2880]: E0707 05:53:15.520291 2880 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:15.570110 containerd[1834]: time="2025-07-07T05:53:15.569790050Z" level=info msg="CreateContainer within sandbox \"f0eb89ebced5479fb1427ebab7ebba0225b918e035c3142ebdc3a7d1dc33c697\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5dd81b3dce8298e77d0bb70c261eecc2c7316d54d26f97ef2463e51bf438ce0c\"" Jul 7 05:53:15.571041 containerd[1834]: time="2025-07-07T05:53:15.570997330Z" level=info msg="StartContainer for \"5dd81b3dce8298e77d0bb70c261eecc2c7316d54d26f97ef2463e51bf438ce0c\"" Jul 7 05:53:15.589439 containerd[1834]: time="2025-07-07T05:53:15.589147766Z" level=info msg="CreateContainer within sandbox \"5defcf0119fc49a48e416bc2bc0b37d5f96326a231bc598332220f7061b39fb9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08e965a039c7fd23f68c8b0cb79da1c1cc81ba5f4f4bc4832c7fcf618fe08a22\"" Jul 7 05:53:15.590104 containerd[1834]: time="2025-07-07T05:53:15.589733526Z" level=info msg="StartContainer for \"08e965a039c7fd23f68c8b0cb79da1c1cc81ba5f4f4bc4832c7fcf618fe08a22\"" Jul 7 05:53:15.594833 containerd[1834]: time="2025-07-07T05:53:15.594685966Z" level=info msg="CreateContainer within sandbox \"1cb5e03f89c8f086f7fc2837a930740ad582dd5899ddfc7dd4b16e41e4dbb10f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04fd98c2209c4721191b13b4849e5e78ad9615b153594c45f09b2d479852c18c\"" Jul 7 05:53:15.595773 containerd[1834]: time="2025-07-07T05:53:15.595737245Z" level=info msg="StartContainer for \"04fd98c2209c4721191b13b4849e5e78ad9615b153594c45f09b2d479852c18c\"" Jul 7 05:53:15.684702 containerd[1834]: time="2025-07-07T05:53:15.684573310Z" 
level=info msg="StartContainer for \"5dd81b3dce8298e77d0bb70c261eecc2c7316d54d26f97ef2463e51bf438ce0c\" returns successfully" Jul 7 05:53:15.711613 containerd[1834]: time="2025-07-07T05:53:15.711557825Z" level=info msg="StartContainer for \"08e965a039c7fd23f68c8b0cb79da1c1cc81ba5f4f4bc4832c7fcf618fe08a22\" returns successfully" Jul 7 05:53:15.726463 containerd[1834]: time="2025-07-07T05:53:15.726052902Z" level=info msg="StartContainer for \"04fd98c2209c4721191b13b4849e5e78ad9615b153594c45f09b2d479852c18c\" returns successfully" Jul 7 05:53:16.757427 kubelet[2880]: I0707 05:53:16.757393 2880 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:17.338230 update_engine[1810]: I20250707 05:53:17.338125 1810 update_attempter.cc:509] Updating boot flags... Jul 7 05:53:17.501072 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3165) Jul 7 05:53:18.177665 kubelet[2880]: E0707 05:53:18.177611 2880 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-5429f7cfbd\" not found" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:18.353795 kubelet[2880]: I0707 05:53:18.353748 2880 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:18.353795 kubelet[2880]: E0707 05:53:18.353800 2880 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-a-5429f7cfbd\": node \"ci-4081.3.4-a-5429f7cfbd\" not found" Jul 7 05:53:18.523459 kubelet[2880]: I0707 05:53:18.523415 2880 apiserver.go:52] "Watching apiserver" Jul 7 05:53:18.533395 kubelet[2880]: I0707 05:53:18.533352 2880 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:18.647684 kubelet[2880]: E0707 05:53:18.647634 2880 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:18.652087 kubelet[2880]: E0707 05:53:18.651507 2880 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:19.242316 kubelet[2880]: W0707 05:53:19.242263 2880 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 05:53:20.697843 systemd[1]: Reloading requested from client PID 3193 ('systemctl') (unit session-9.scope)... Jul 7 05:53:20.697860 systemd[1]: Reloading... Jul 7 05:53:20.777241 zram_generator::config[3234]: No configuration found. Jul 7 05:53:20.907613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:20.999083 systemd[1]: Reloading finished in 300 ms. Jul 7 05:53:21.030731 kubelet[2880]: I0707 05:53:21.030657 2880 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:21.031071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:21.049379 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:53:21.049680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:21.059688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:21.248071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 05:53:21.258692 (kubelet)[3307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:21.307358 kubelet[3307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:21.308424 kubelet[3307]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:53:21.308424 kubelet[3307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:21.308424 kubelet[3307]: I0707 05:53:21.307481 3307 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:21.320824 kubelet[3307]: I0707 05:53:21.320740 3307 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:21.320824 kubelet[3307]: I0707 05:53:21.320775 3307 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:21.321429 kubelet[3307]: I0707 05:53:21.321408 3307 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:21.323035 kubelet[3307]: I0707 05:53:21.323003 3307 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 7 05:53:21.325878 kubelet[3307]: I0707 05:53:21.325816 3307 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:21.333106 kubelet[3307]: E0707 05:53:21.332428 3307 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:21.333106 kubelet[3307]: I0707 05:53:21.332466 3307 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:21.338242 kubelet[3307]: I0707 05:53:21.338207 3307 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 05:53:21.338847 kubelet[3307]: I0707 05:53:21.338830 3307 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:21.339108 kubelet[3307]: I0707 05:53:21.339072 3307 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:21.339394 kubelet[3307]: I0707 05:53:21.339183 3307 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.4-a-5429f7cfbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 05:53:21.339693 kubelet[3307]: I0707 05:53:21.339524 3307 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:21.339693 kubelet[3307]: I0707 05:53:21.339541 3307 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:21.339693 kubelet[3307]: I0707 05:53:21.339587 3307 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:21.339820 kubelet[3307]: I0707 05:53:21.339811 3307 kubelet.go:408] 
"Attempting to sync node with API server" Jul 7 05:53:21.340465 kubelet[3307]: I0707 05:53:21.340407 3307 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:21.341108 kubelet[3307]: I0707 05:53:21.340528 3307 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:21.341108 kubelet[3307]: I0707 05:53:21.340556 3307 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:21.350920 kubelet[3307]: I0707 05:53:21.350876 3307 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:21.354067 kubelet[3307]: I0707 05:53:21.351500 3307 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:53:21.354067 kubelet[3307]: I0707 05:53:21.352695 3307 server.go:1274] "Started kubelet" Jul 7 05:53:21.356533 kubelet[3307]: I0707 05:53:21.356324 3307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:21.358959 kubelet[3307]: I0707 05:53:21.358904 3307 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:21.359509 kubelet[3307]: I0707 05:53:21.359457 3307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:21.359801 kubelet[3307]: I0707 05:53:21.359773 3307 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:21.360107 kubelet[3307]: I0707 05:53:21.360084 3307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:21.361205 kubelet[3307]: I0707 05:53:21.361177 3307 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:21.361824 kubelet[3307]: E0707 05:53:21.361787 3307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-5429f7cfbd\" not found" Jul 
7 05:53:21.367827 kubelet[3307]: I0707 05:53:21.367798 3307 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:21.368095 kubelet[3307]: I0707 05:53:21.368050 3307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:21.373436 kubelet[3307]: I0707 05:53:21.373384 3307 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:21.374756 kubelet[3307]: I0707 05:53:21.374728 3307 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:21.375005 kubelet[3307]: I0707 05:53:21.374858 3307 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:21.386429 kubelet[3307]: E0707 05:53:21.386387 3307 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:21.388094 kubelet[3307]: I0707 05:53:21.386886 3307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:53:21.390624 kubelet[3307]: I0707 05:53:21.390201 3307 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:53:21.391103 kubelet[3307]: I0707 05:53:21.391090 3307 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:21.391527 kubelet[3307]: I0707 05:53:21.391350 3307 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:21.393133 kubelet[3307]: E0707 05:53:21.391817 3307 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:21.395065 kubelet[3307]: I0707 05:53:21.394369 3307 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:21.466807 kubelet[3307]: I0707 05:53:21.466775 3307 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:21.466807 kubelet[3307]: I0707 05:53:21.466797 3307 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:21.466976 kubelet[3307]: I0707 05:53:21.466824 3307 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:21.467024 kubelet[3307]: I0707 05:53:21.467004 3307 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 05:53:21.467053 kubelet[3307]: I0707 05:53:21.467021 3307 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 05:53:21.467053 kubelet[3307]: I0707 05:53:21.467040 3307 policy_none.go:49] "None policy: Start" Jul 7 05:53:21.467978 kubelet[3307]: I0707 05:53:21.467939 3307 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:21.467978 kubelet[3307]: I0707 05:53:21.467973 3307 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:21.468268 kubelet[3307]: I0707 05:53:21.468245 3307 state_mem.go:75] "Updated machine memory state" Jul 7 05:53:21.469503 kubelet[3307]: I0707 05:53:21.469477 3307 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:21.470573 kubelet[3307]: I0707 05:53:21.469662 3307 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 7 05:53:21.470573 kubelet[3307]: I0707 05:53:21.469684 3307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:21.470573 kubelet[3307]: I0707 05:53:21.470475 3307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:21.504334 kubelet[3307]: W0707 05:53:21.504288 3307 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 05:53:21.508934 kubelet[3307]: W0707 05:53:21.508813 3307 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 05:53:21.510198 kubelet[3307]: W0707 05:53:21.510012 3307 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 05:53:21.510198 kubelet[3307]: E0707 05:53:21.510097 3307 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.4-a-5429f7cfbd\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579022 kubelet[3307]: I0707 05:53:21.578776 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579022 kubelet[3307]: I0707 05:53:21.578817 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: 
\"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579022 kubelet[3307]: I0707 05:53:21.578840 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579022 kubelet[3307]: I0707 05:53:21.578859 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3f9cb2ee93e3f6a78a75e409789706e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-5429f7cfbd\" (UID: \"d3f9cb2ee93e3f6a78a75e409789706e\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579022 kubelet[3307]: I0707 05:53:21.578876 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579289 kubelet[3307]: I0707 05:53:21.578890 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579289 kubelet[3307]: I0707 05:53:21.578906 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/38ad66146f22f588ff92c70873b0b285-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" (UID: \"38ad66146f22f588ff92c70873b0b285\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579289 kubelet[3307]: I0707 05:53:21.578920 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.579289 kubelet[3307]: I0707 05:53:21.578937 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97a037a48748624e1dbceadeb27d5f2f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-5429f7cfbd\" (UID: \"97a037a48748624e1dbceadeb27d5f2f\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.581389 kubelet[3307]: I0707 05:53:21.581139 3307 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.594684 kubelet[3307]: I0707 05:53:21.594641 3307 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:21.594857 kubelet[3307]: I0707 05:53:21.594753 3307 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:22.346733 kubelet[3307]: I0707 05:53:22.346684 3307 apiserver.go:52] "Watching apiserver" Jul 7 05:53:22.376103 kubelet[3307]: I0707 05:53:22.375419 3307 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:22.452903 kubelet[3307]: W0707 05:53:22.452573 3307 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Jul 7 05:53:22.452903 kubelet[3307]: E0707 05:53:22.452664 3307 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-a-5429f7cfbd\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" Jul 7 05:53:22.504194 kubelet[3307]: I0707 05:53:22.504109 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-5429f7cfbd" podStartSLOduration=1.504071187 podStartE2EDuration="1.504071187s" podCreationTimestamp="2025-07-07 05:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:22.503341348 +0000 UTC m=+1.239480731" watchObservedRunningTime="2025-07-07 05:53:22.504071187 +0000 UTC m=+1.240210570" Jul 7 05:53:22.556134 kubelet[3307]: I0707 05:53:22.555588 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-5429f7cfbd" podStartSLOduration=1.5552995649999999 podStartE2EDuration="1.555299565s" podCreationTimestamp="2025-07-07 05:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:22.531232975 +0000 UTC m=+1.267372318" watchObservedRunningTime="2025-07-07 05:53:22.555299565 +0000 UTC m=+1.291438948" Jul 7 05:53:22.556524 kubelet[3307]: I0707 05:53:22.556465 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-5429f7cfbd" podStartSLOduration=3.556425804 podStartE2EDuration="3.556425804s" podCreationTimestamp="2025-07-07 05:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:22.556219324 +0000 UTC m=+1.292358707" watchObservedRunningTime="2025-07-07 05:53:22.556425804 
+0000 UTC m=+1.292565147" Jul 7 05:53:26.887946 kubelet[3307]: I0707 05:53:26.887882 3307 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:53:26.889075 containerd[1834]: time="2025-07-07T05:53:26.888923227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 05:53:26.889795 kubelet[3307]: I0707 05:53:26.889765 3307 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 05:53:27.817891 kubelet[3307]: I0707 05:53:27.817647 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cef2e72-f288-433b-9a97-acae9cf4d44d-lib-modules\") pod \"kube-proxy-dqpbg\" (UID: \"9cef2e72-f288-433b-9a97-acae9cf4d44d\") " pod="kube-system/kube-proxy-dqpbg" Jul 7 05:53:27.818461 kubelet[3307]: I0707 05:53:27.818034 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jv9\" (UniqueName: \"kubernetes.io/projected/9cef2e72-f288-433b-9a97-acae9cf4d44d-kube-api-access-58jv9\") pod \"kube-proxy-dqpbg\" (UID: \"9cef2e72-f288-433b-9a97-acae9cf4d44d\") " pod="kube-system/kube-proxy-dqpbg" Jul 7 05:53:27.818461 kubelet[3307]: I0707 05:53:27.818307 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cef2e72-f288-433b-9a97-acae9cf4d44d-kube-proxy\") pod \"kube-proxy-dqpbg\" (UID: \"9cef2e72-f288-433b-9a97-acae9cf4d44d\") " pod="kube-system/kube-proxy-dqpbg" Jul 7 05:53:27.818461 kubelet[3307]: I0707 05:53:27.818344 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cef2e72-f288-433b-9a97-acae9cf4d44d-xtables-lock\") pod \"kube-proxy-dqpbg\" (UID: 
\"9cef2e72-f288-433b-9a97-acae9cf4d44d\") " pod="kube-system/kube-proxy-dqpbg" Jul 7 05:53:27.919376 kubelet[3307]: I0707 05:53:27.918830 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwtq\" (UniqueName: \"kubernetes.io/projected/2b60bdcb-f972-4c91-a1bf-d778a5ac0c48-kube-api-access-zjwtq\") pod \"tigera-operator-5bf8dfcb4-xzmjq\" (UID: \"2b60bdcb-f972-4c91-a1bf-d778a5ac0c48\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xzmjq" Jul 7 05:53:27.919376 kubelet[3307]: I0707 05:53:27.918884 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b60bdcb-f972-4c91-a1bf-d778a5ac0c48-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-xzmjq\" (UID: \"2b60bdcb-f972-4c91-a1bf-d778a5ac0c48\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xzmjq" Jul 7 05:53:28.107200 containerd[1834]: time="2025-07-07T05:53:28.106906928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqpbg,Uid:9cef2e72-f288-433b-9a97-acae9cf4d44d,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:28.162078 containerd[1834]: time="2025-07-07T05:53:28.161819922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:28.162078 containerd[1834]: time="2025-07-07T05:53:28.162014442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:28.162268 containerd[1834]: time="2025-07-07T05:53:28.162129002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:28.163087 containerd[1834]: time="2025-07-07T05:53:28.162883442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:28.185014 containerd[1834]: time="2025-07-07T05:53:28.184948720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xzmjq,Uid:2b60bdcb-f972-4c91-a1bf-d778a5ac0c48,Namespace:tigera-operator,Attempt:0,}" Jul 7 05:53:28.201517 containerd[1834]: time="2025-07-07T05:53:28.201373998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqpbg,Uid:9cef2e72-f288-433b-9a97-acae9cf4d44d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9980c7fdc723361599064c6a3526f135d272d136af216590aa6354efaf438df4\"" Jul 7 05:53:28.205354 containerd[1834]: time="2025-07-07T05:53:28.205307878Z" level=info msg="CreateContainer within sandbox \"9980c7fdc723361599064c6a3526f135d272d136af216590aa6354efaf438df4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 05:53:28.268964 containerd[1834]: time="2025-07-07T05:53:28.268851551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:28.268964 containerd[1834]: time="2025-07-07T05:53:28.268927391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:28.269297 containerd[1834]: time="2025-07-07T05:53:28.268944111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:28.269867 containerd[1834]: time="2025-07-07T05:53:28.269804671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:28.306514 containerd[1834]: time="2025-07-07T05:53:28.306070787Z" level=info msg="CreateContainer within sandbox \"9980c7fdc723361599064c6a3526f135d272d136af216590aa6354efaf438df4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d79471b2b5187e3919f787d7ab5e5c2d1d513973afd43c251af5706f1691105f\"" Jul 7 05:53:28.309197 containerd[1834]: time="2025-07-07T05:53:28.309151707Z" level=info msg="StartContainer for \"d79471b2b5187e3919f787d7ab5e5c2d1d513973afd43c251af5706f1691105f\"" Jul 7 05:53:28.324827 containerd[1834]: time="2025-07-07T05:53:28.324786985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xzmjq,Uid:2b60bdcb-f972-4c91-a1bf-d778a5ac0c48,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e06a1fbe2eba57ff5aa763d9f206c3bf3a1e3fb994b55516d62faf83d5d2f3e6\"" Jul 7 05:53:28.327850 containerd[1834]: time="2025-07-07T05:53:28.327810025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 05:53:28.383737 containerd[1834]: time="2025-07-07T05:53:28.383613019Z" level=info msg="StartContainer for \"d79471b2b5187e3919f787d7ab5e5c2d1d513973afd43c251af5706f1691105f\" returns successfully" Jul 7 05:53:28.494238 kubelet[3307]: I0707 05:53:28.494163 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqpbg" podStartSLOduration=1.494125207 podStartE2EDuration="1.494125207s" podCreationTimestamp="2025-07-07 05:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:28.476769129 +0000 UTC m=+7.212908592" watchObservedRunningTime="2025-07-07 05:53:28.494125207 +0000 UTC m=+7.230264590" Jul 7 05:53:29.942468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058183143.mount: Deactivated successfully. 
Jul 7 05:53:30.370601 containerd[1834]: time="2025-07-07T05:53:30.370543806Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.373599 containerd[1834]: time="2025-07-07T05:53:30.373526046Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 05:53:30.379158 containerd[1834]: time="2025-07-07T05:53:30.379084405Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.386906 containerd[1834]: time="2025-07-07T05:53:30.386814564Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.387895 containerd[1834]: time="2025-07-07T05:53:30.387681524Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.059519379s" Jul 7 05:53:30.387895 containerd[1834]: time="2025-07-07T05:53:30.387722364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 05:53:30.391649 containerd[1834]: time="2025-07-07T05:53:30.391600084Z" level=info msg="CreateContainer within sandbox \"e06a1fbe2eba57ff5aa763d9f206c3bf3a1e3fb994b55516d62faf83d5d2f3e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 05:53:30.454827 containerd[1834]: time="2025-07-07T05:53:30.454774797Z" level=info msg="CreateContainer within sandbox 
\"e06a1fbe2eba57ff5aa763d9f206c3bf3a1e3fb994b55516d62faf83d5d2f3e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e05d701db93790456cf004989380f9862b82cdb48661e57e281a01c69366b77\"" Jul 7 05:53:30.456728 containerd[1834]: time="2025-07-07T05:53:30.455671677Z" level=info msg="StartContainer for \"3e05d701db93790456cf004989380f9862b82cdb48661e57e281a01c69366b77\"" Jul 7 05:53:30.513971 containerd[1834]: time="2025-07-07T05:53:30.513919271Z" level=info msg="StartContainer for \"3e05d701db93790456cf004989380f9862b82cdb48661e57e281a01c69366b77\" returns successfully" Jul 7 05:53:33.534093 kubelet[3307]: I0707 05:53:33.532196 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-xzmjq" podStartSLOduration=4.470616209 podStartE2EDuration="6.532177948s" podCreationTimestamp="2025-07-07 05:53:27 +0000 UTC" firstStartedPulling="2025-07-07 05:53:28.327247985 +0000 UTC m=+7.063387368" lastFinishedPulling="2025-07-07 05:53:30.388809724 +0000 UTC m=+9.124949107" observedRunningTime="2025-07-07 05:53:31.483808167 +0000 UTC m=+10.219947550" watchObservedRunningTime="2025-07-07 05:53:33.532177948 +0000 UTC m=+12.268317331" Jul 7 05:53:36.840339 sudo[2357]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:36.924207 sshd[2353]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:36.929177 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:34572.service: Deactivated successfully. Jul 7 05:53:36.938801 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 05:53:36.940607 systemd-logind[1801]: Session 9 logged out. Waiting for processes to exit. Jul 7 05:53:36.942803 systemd-logind[1801]: Removed session 9. 
Jul 7 05:53:45.540778 kubelet[3307]: I0707 05:53:45.540713 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9247f87c-2526-4198-9e83-d09163de66a4-tigera-ca-bundle\") pod \"calico-typha-8f46b865d-ncddn\" (UID: \"9247f87c-2526-4198-9e83-d09163de66a4\") " pod="calico-system/calico-typha-8f46b865d-ncddn" Jul 7 05:53:45.540778 kubelet[3307]: I0707 05:53:45.540773 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9247f87c-2526-4198-9e83-d09163de66a4-typha-certs\") pod \"calico-typha-8f46b865d-ncddn\" (UID: \"9247f87c-2526-4198-9e83-d09163de66a4\") " pod="calico-system/calico-typha-8f46b865d-ncddn" Jul 7 05:53:45.540778 kubelet[3307]: I0707 05:53:45.540794 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69zh\" (UniqueName: \"kubernetes.io/projected/9247f87c-2526-4198-9e83-d09163de66a4-kube-api-access-p69zh\") pod \"calico-typha-8f46b865d-ncddn\" (UID: \"9247f87c-2526-4198-9e83-d09163de66a4\") " pod="calico-system/calico-typha-8f46b865d-ncddn" Jul 7 05:53:45.743103 kubelet[3307]: I0707 05:53:45.742436 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-policysync\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743103 kubelet[3307]: I0707 05:53:45.742483 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-var-lib-calico\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" 
Jul 7 05:53:45.743103 kubelet[3307]: I0707 05:53:45.742507 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14831fb2-8a2c-4818-8d80-01e1663c1e45-tigera-ca-bundle\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743103 kubelet[3307]: I0707 05:53:45.742522 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-xtables-lock\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743103 kubelet[3307]: I0707 05:53:45.742545 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-cni-bin-dir\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743367 kubelet[3307]: I0707 05:53:45.742562 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-cni-net-dir\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743367 kubelet[3307]: I0707 05:53:45.742581 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-flexvol-driver-host\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743367 kubelet[3307]: I0707 05:53:45.742605 3307 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/14831fb2-8a2c-4818-8d80-01e1663c1e45-node-certs\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743367 kubelet[3307]: I0707 05:53:45.742627 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-lib-modules\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.743367 kubelet[3307]: I0707 05:53:45.742648 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-var-run-calico\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.744144 kubelet[3307]: I0707 05:53:45.742668 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/14831fb2-8a2c-4818-8d80-01e1663c1e45-cni-log-dir\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.744144 kubelet[3307]: I0707 05:53:45.742687 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s86h\" (UniqueName: \"kubernetes.io/projected/14831fb2-8a2c-4818-8d80-01e1663c1e45-kube-api-access-9s86h\") pod \"calico-node-w5flf\" (UID: \"14831fb2-8a2c-4818-8d80-01e1663c1e45\") " pod="calico-system/calico-node-w5flf" Jul 7 05:53:45.800600 containerd[1834]: time="2025-07-07T05:53:45.800402263Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-8f46b865d-ncddn,Uid:9247f87c-2526-4198-9e83-d09163de66a4,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:45.817800 kubelet[3307]: E0707 05:53:45.815045 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:45.844105 kubelet[3307]: I0707 05:53:45.844014 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c92a7bb2-2db4-4f96-97d0-028fc27545ab-registration-dir\") pod \"csi-node-driver-z7clz\" (UID: \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\") " pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:45.844267 kubelet[3307]: I0707 05:53:45.844231 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c92a7bb2-2db4-4f96-97d0-028fc27545ab-kubelet-dir\") pod \"csi-node-driver-z7clz\" (UID: \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\") " pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:45.844719 kubelet[3307]: I0707 05:53:45.844409 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c92a7bb2-2db4-4f96-97d0-028fc27545ab-varrun\") pod \"csi-node-driver-z7clz\" (UID: \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\") " pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:45.844719 kubelet[3307]: I0707 05:53:45.844466 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnw69\" (UniqueName: \"kubernetes.io/projected/c92a7bb2-2db4-4f96-97d0-028fc27545ab-kube-api-access-hnw69\") pod \"csi-node-driver-z7clz\" (UID: 
\"c92a7bb2-2db4-4f96-97d0-028fc27545ab\") " pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:45.844719 kubelet[3307]: I0707 05:53:45.844599 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c92a7bb2-2db4-4f96-97d0-028fc27545ab-socket-dir\") pod \"csi-node-driver-z7clz\" (UID: \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\") " pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:45.854238 kubelet[3307]: E0707 05:53:45.854086 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.854238 kubelet[3307]: W0707 05:53:45.854122 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.854451 kubelet[3307]: E0707 05:53:45.854427 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.862157 kubelet[3307]: E0707 05:53:45.857557 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.862157 kubelet[3307]: W0707 05:53:45.857589 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.862157 kubelet[3307]: E0707 05:53:45.857611 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.862157 kubelet[3307]: E0707 05:53:45.861425 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.862157 kubelet[3307]: W0707 05:53:45.861449 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.862157 kubelet[3307]: E0707 05:53:45.861474 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.866472 kubelet[3307]: E0707 05:53:45.866355 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.866472 kubelet[3307]: W0707 05:53:45.866386 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.866472 kubelet[3307]: E0707 05:53:45.866415 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.867160 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.869441 kubelet[3307]: W0707 05:53:45.867183 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.867200 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.867718 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.869441 kubelet[3307]: W0707 05:53:45.867883 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.867904 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.868948 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.869441 kubelet[3307]: W0707 05:53:45.869160 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.869441 kubelet[3307]: E0707 05:53:45.869180 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.889210 kubelet[3307]: E0707 05:53:45.885193 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.889210 kubelet[3307]: W0707 05:53:45.885223 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.889210 kubelet[3307]: E0707 05:53:45.885256 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.890160 kubelet[3307]: E0707 05:53:45.890131 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.890323 kubelet[3307]: W0707 05:53:45.890305 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.894695 kubelet[3307]: E0707 05:53:45.894650 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.928120 containerd[1834]: time="2025-07-07T05:53:45.924462927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:45.928120 containerd[1834]: time="2025-07-07T05:53:45.926727447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:45.928120 containerd[1834]: time="2025-07-07T05:53:45.926801447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:45.930475 containerd[1834]: time="2025-07-07T05:53:45.930174206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:45.950159 kubelet[3307]: E0707 05:53:45.948399 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.950159 kubelet[3307]: W0707 05:53:45.949117 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.950159 kubelet[3307]: E0707 05:53:45.949145 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.951140 kubelet[3307]: E0707 05:53:45.950245 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.951140 kubelet[3307]: W0707 05:53:45.951135 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.951280 kubelet[3307]: E0707 05:53:45.951185 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.953337 kubelet[3307]: E0707 05:53:45.953298 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.953337 kubelet[3307]: W0707 05:53:45.953323 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.956303 kubelet[3307]: E0707 05:53:45.956169 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.957313 kubelet[3307]: E0707 05:53:45.957285 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.957361 kubelet[3307]: W0707 05:53:45.957313 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.957361 kubelet[3307]: E0707 05:53:45.957342 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.960247 kubelet[3307]: E0707 05:53:45.960198 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.960247 kubelet[3307]: W0707 05:53:45.960241 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.960619 kubelet[3307]: E0707 05:53:45.960591 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.966378 kubelet[3307]: E0707 05:53:45.966153 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.966378 kubelet[3307]: W0707 05:53:45.966238 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.966596 kubelet[3307]: E0707 05:53:45.966510 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.968347 kubelet[3307]: E0707 05:53:45.968284 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.968347 kubelet[3307]: W0707 05:53:45.968338 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.970454 kubelet[3307]: E0707 05:53:45.970372 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.974902 kubelet[3307]: E0707 05:53:45.974837 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.974902 kubelet[3307]: W0707 05:53:45.974891 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.975258 kubelet[3307]: E0707 05:53:45.975156 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.975947 kubelet[3307]: E0707 05:53:45.975894 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.976091 kubelet[3307]: W0707 05:53:45.975926 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.976693 kubelet[3307]: E0707 05:53:45.976181 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.977347 kubelet[3307]: E0707 05:53:45.976846 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.977347 kubelet[3307]: W0707 05:53:45.976879 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.977615 kubelet[3307]: E0707 05:53:45.977532 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.981103 kubelet[3307]: E0707 05:53:45.979495 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.981103 kubelet[3307]: W0707 05:53:45.979523 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.981332 containerd[1834]: time="2025-07-07T05:53:45.979713360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w5flf,Uid:14831fb2-8a2c-4818-8d80-01e1663c1e45,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:45.982123 kubelet[3307]: E0707 05:53:45.981437 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.982123 kubelet[3307]: E0707 05:53:45.981644 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.982123 kubelet[3307]: W0707 05:53:45.981693 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.982123 kubelet[3307]: E0707 05:53:45.981771 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.982375 kubelet[3307]: E0707 05:53:45.982140 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.982375 kubelet[3307]: W0707 05:53:45.982185 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.982375 kubelet[3307]: E0707 05:53:45.982274 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.983621 kubelet[3307]: E0707 05:53:45.983575 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.984319 kubelet[3307]: W0707 05:53:45.983635 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.984319 kubelet[3307]: E0707 05:53:45.983725 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.984319 kubelet[3307]: E0707 05:53:45.984205 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.984319 kubelet[3307]: W0707 05:53:45.984242 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.984485 kubelet[3307]: E0707 05:53:45.984342 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.985321 kubelet[3307]: E0707 05:53:45.984627 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.985321 kubelet[3307]: W0707 05:53:45.984647 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.985321 kubelet[3307]: E0707 05:53:45.984749 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.985321 kubelet[3307]: E0707 05:53:45.984947 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.985321 kubelet[3307]: W0707 05:53:45.984958 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.985321 kubelet[3307]: E0707 05:53:45.985036 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.987216 kubelet[3307]: E0707 05:53:45.985470 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.987216 kubelet[3307]: W0707 05:53:45.985483 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.987216 kubelet[3307]: E0707 05:53:45.985543 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.987519 kubelet[3307]: E0707 05:53:45.987361 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.987519 kubelet[3307]: W0707 05:53:45.987379 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.988209 kubelet[3307]: E0707 05:53:45.988121 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.989500 kubelet[3307]: E0707 05:53:45.989464 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.989500 kubelet[3307]: W0707 05:53:45.989493 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.989904 kubelet[3307]: E0707 05:53:45.989646 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.991573 kubelet[3307]: E0707 05:53:45.991276 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.991573 kubelet[3307]: W0707 05:53:45.991351 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.992849 kubelet[3307]: E0707 05:53:45.992693 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.993650 kubelet[3307]: E0707 05:53:45.993629 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.993887 kubelet[3307]: W0707 05:53:45.993788 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.994859 kubelet[3307]: E0707 05:53:45.994383 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:45.997287 kubelet[3307]: E0707 05:53:45.997247 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.997287 kubelet[3307]: W0707 05:53:45.997275 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.997287 kubelet[3307]: E0707 05:53:45.997338 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:45.999459 kubelet[3307]: E0707 05:53:45.999017 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:45.999459 kubelet[3307]: W0707 05:53:45.999044 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:45.999459 kubelet[3307]: E0707 05:53:45.999088 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:46.001950 kubelet[3307]: E0707 05:53:46.001506 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:46.001950 kubelet[3307]: W0707 05:53:46.001542 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:46.001950 kubelet[3307]: E0707 05:53:46.001565 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:46.036116 kubelet[3307]: E0707 05:53:46.035816 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:46.036116 kubelet[3307]: W0707 05:53:46.035846 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:46.036116 kubelet[3307]: E0707 05:53:46.035870 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:46.106422 containerd[1834]: time="2025-07-07T05:53:46.104685584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:46.106422 containerd[1834]: time="2025-07-07T05:53:46.105274863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:46.107477 containerd[1834]: time="2025-07-07T05:53:46.105434663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:46.110950 containerd[1834]: time="2025-07-07T05:53:46.109410303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:46.184811 containerd[1834]: time="2025-07-07T05:53:46.184754133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8f46b865d-ncddn,Uid:9247f87c-2526-4198-9e83-d09163de66a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"63143017a1cf9b8b3a2c440dcd674ddf9bd4a244297288a8c37ca2428f24e999\"" Jul 7 05:53:46.190390 containerd[1834]: time="2025-07-07T05:53:46.190277652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 05:53:46.209747 containerd[1834]: time="2025-07-07T05:53:46.209691250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w5flf,Uid:14831fb2-8a2c-4818-8d80-01e1663c1e45,Namespace:calico-system,Attempt:0,} returns sandbox id \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\"" Jul 7 05:53:47.393541 kubelet[3307]: E0707 05:53:47.392946 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:47.663912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803057141.mount: Deactivated successfully. 
Jul 7 05:53:48.719203 containerd[1834]: time="2025-07-07T05:53:48.719093561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.725170 containerd[1834]: time="2025-07-07T05:53:48.724373880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 05:53:48.732167 containerd[1834]: time="2025-07-07T05:53:48.732111999Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.742500 containerd[1834]: time="2025-07-07T05:53:48.742451598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:48.743237 containerd[1834]: time="2025-07-07T05:53:48.743199278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.552866466s" Jul 7 05:53:48.743366 containerd[1834]: time="2025-07-07T05:53:48.743350038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 05:53:48.746863 containerd[1834]: time="2025-07-07T05:53:48.746809397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 05:53:48.760196 containerd[1834]: time="2025-07-07T05:53:48.759854836Z" level=info msg="CreateContainer within sandbox \"63143017a1cf9b8b3a2c440dcd674ddf9bd4a244297288a8c37ca2428f24e999\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 05:53:48.814334 containerd[1834]: time="2025-07-07T05:53:48.814154869Z" level=info msg="CreateContainer within sandbox \"63143017a1cf9b8b3a2c440dcd674ddf9bd4a244297288a8c37ca2428f24e999\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"890b5f8241e8dbbe1fe5623f313d5a196a840efd613dbc69a96705d23e2149dc\"" Jul 7 05:53:48.815248 containerd[1834]: time="2025-07-07T05:53:48.815080428Z" level=info msg="StartContainer for \"890b5f8241e8dbbe1fe5623f313d5a196a840efd613dbc69a96705d23e2149dc\"" Jul 7 05:53:48.881485 containerd[1834]: time="2025-07-07T05:53:48.881400340Z" level=info msg="StartContainer for \"890b5f8241e8dbbe1fe5623f313d5a196a840efd613dbc69a96705d23e2149dc\" returns successfully" Jul 7 05:53:49.392751 kubelet[3307]: E0707 05:53:49.392356 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:49.561925 kubelet[3307]: E0707 05:53:49.561803 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.561925 kubelet[3307]: W0707 05:53:49.561834 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.561925 kubelet[3307]: E0707 05:53:49.561859 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.562197 kubelet[3307]: E0707 05:53:49.562043 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562197 kubelet[3307]: W0707 05:53:49.562052 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562197 kubelet[3307]: E0707 05:53:49.562083 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.562271 kubelet[3307]: E0707 05:53:49.562228 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562271 kubelet[3307]: W0707 05:53:49.562235 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562271 kubelet[3307]: E0707 05:53:49.562244 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.562390 kubelet[3307]: E0707 05:53:49.562376 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562390 kubelet[3307]: W0707 05:53:49.562387 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562454 kubelet[3307]: E0707 05:53:49.562395 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.562585 kubelet[3307]: E0707 05:53:49.562551 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562585 kubelet[3307]: W0707 05:53:49.562563 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562585 kubelet[3307]: E0707 05:53:49.562570 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.562717 kubelet[3307]: E0707 05:53:49.562699 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562717 kubelet[3307]: W0707 05:53:49.562711 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562717 kubelet[3307]: E0707 05:53:49.562718 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.562863 kubelet[3307]: E0707 05:53:49.562849 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.562863 kubelet[3307]: W0707 05:53:49.562863 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.562927 kubelet[3307]: E0707 05:53:49.562871 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.563331 kubelet[3307]: E0707 05:53:49.563086 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.563331 kubelet[3307]: W0707 05:53:49.563099 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.563331 kubelet[3307]: E0707 05:53:49.563108 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.563450 kubelet[3307]: E0707 05:53:49.563404 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.563450 kubelet[3307]: W0707 05:53:49.563415 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.563450 kubelet[3307]: E0707 05:53:49.563425 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.563933 kubelet[3307]: E0707 05:53:49.563626 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.563933 kubelet[3307]: W0707 05:53:49.563639 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.563933 kubelet[3307]: E0707 05:53:49.563648 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.563933 kubelet[3307]: E0707 05:53:49.563863 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.563933 kubelet[3307]: W0707 05:53:49.563872 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.563933 kubelet[3307]: E0707 05:53:49.563902 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564124 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.564909 kubelet[3307]: W0707 05:53:49.564135 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564170 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564458 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.564909 kubelet[3307]: W0707 05:53:49.564469 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564497 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564677 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.564909 kubelet[3307]: W0707 05:53:49.564687 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564717 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.564909 kubelet[3307]: E0707 05:53:49.564927 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.566289 kubelet[3307]: W0707 05:53:49.564936 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.566289 kubelet[3307]: E0707 05:53:49.564944 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.567260 kubelet[3307]: I0707 05:53:49.566880 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8f46b865d-ncddn" podStartSLOduration=2.011612025 podStartE2EDuration="4.56686317s" podCreationTimestamp="2025-07-07 05:53:45 +0000 UTC" firstStartedPulling="2025-07-07 05:53:46.188970493 +0000 UTC m=+24.925109876" lastFinishedPulling="2025-07-07 05:53:48.744221638 +0000 UTC m=+27.480361021" observedRunningTime="2025-07-07 05:53:49.548094052 +0000 UTC m=+28.284233435" watchObservedRunningTime="2025-07-07 05:53:49.56686317 +0000 UTC m=+28.303002553" Jul 7 05:53:49.615887 kubelet[3307]: E0707 05:53:49.615837 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.615887 kubelet[3307]: W0707 05:53:49.615871 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.616597 kubelet[3307]: E0707 05:53:49.615916 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.616597 kubelet[3307]: E0707 05:53:49.616282 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.616597 kubelet[3307]: W0707 05:53:49.616296 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.616597 kubelet[3307]: E0707 05:53:49.616318 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.617646 kubelet[3307]: E0707 05:53:49.617609 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.617646 kubelet[3307]: W0707 05:53:49.617635 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.617804 kubelet[3307]: E0707 05:53:49.617659 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.617911 kubelet[3307]: E0707 05:53:49.617889 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.617911 kubelet[3307]: W0707 05:53:49.617906 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.618109 kubelet[3307]: E0707 05:53:49.617955 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.618109 kubelet[3307]: E0707 05:53:49.618082 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.618109 kubelet[3307]: W0707 05:53:49.618093 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.618247 kubelet[3307]: E0707 05:53:49.618201 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.618390 kubelet[3307]: E0707 05:53:49.618369 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.618390 kubelet[3307]: W0707 05:53:49.618386 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.618502 kubelet[3307]: E0707 05:53:49.618417 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.618583 kubelet[3307]: E0707 05:53:49.618563 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.618583 kubelet[3307]: W0707 05:53:49.618579 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.618646 kubelet[3307]: E0707 05:53:49.618595 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.618791 kubelet[3307]: E0707 05:53:49.618774 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.618791 kubelet[3307]: W0707 05:53:49.618788 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.618854 kubelet[3307]: E0707 05:53:49.618806 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.619190 kubelet[3307]: E0707 05:53:49.619166 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.619190 kubelet[3307]: W0707 05:53:49.619185 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.619418 kubelet[3307]: E0707 05:53:49.619203 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.619544 kubelet[3307]: E0707 05:53:49.619521 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.619544 kubelet[3307]: W0707 05:53:49.619534 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.619625 kubelet[3307]: E0707 05:53:49.619607 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.619763 kubelet[3307]: E0707 05:53:49.619727 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.619763 kubelet[3307]: W0707 05:53:49.619740 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.619893 kubelet[3307]: E0707 05:53:49.619872 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.619944 kubelet[3307]: E0707 05:53:49.619929 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.620806 kubelet[3307]: W0707 05:53:49.620009 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.620806 kubelet[3307]: E0707 05:53:49.620031 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.621662 kubelet[3307]: E0707 05:53:49.621568 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.621662 kubelet[3307]: W0707 05:53:49.621597 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.621662 kubelet[3307]: E0707 05:53:49.621629 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.623909 kubelet[3307]: E0707 05:53:49.623742 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.623909 kubelet[3307]: W0707 05:53:49.623771 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.624131 kubelet[3307]: E0707 05:53:49.623980 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.624978 kubelet[3307]: E0707 05:53:49.624847 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.624978 kubelet[3307]: W0707 05:53:49.624888 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.626198 kubelet[3307]: E0707 05:53:49.625437 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.626198 kubelet[3307]: E0707 05:53:49.625807 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.626198 kubelet[3307]: W0707 05:53:49.625820 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.626198 kubelet[3307]: E0707 05:53:49.625853 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:49.626198 kubelet[3307]: E0707 05:53:49.626144 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.626198 kubelet[3307]: W0707 05:53:49.626155 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.626198 kubelet[3307]: E0707 05:53:49.626166 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:53:49.626605 kubelet[3307]: E0707 05:53:49.626572 3307 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:53:49.626605 kubelet[3307]: W0707 05:53:49.626589 3307 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:53:49.626605 kubelet[3307]: E0707 05:53:49.626600 3307 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:53:50.334942 containerd[1834]: time="2025-07-07T05:53:50.334240309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:50.339040 containerd[1834]: time="2025-07-07T05:53:50.338996789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 05:53:50.342692 containerd[1834]: time="2025-07-07T05:53:50.342634788Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:50.348657 containerd[1834]: time="2025-07-07T05:53:50.348577467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:50.349553 containerd[1834]: time="2025-07-07T05:53:50.349383747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.60251515s" Jul 7 05:53:50.349553 containerd[1834]: time="2025-07-07T05:53:50.349433227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 05:53:50.353141 containerd[1834]: time="2025-07-07T05:53:50.353072307Z" level=info msg="CreateContainer within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 05:53:50.423872 containerd[1834]: time="2025-07-07T05:53:50.423731578Z" level=info msg="CreateContainer within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c\"" Jul 7 05:53:50.425482 containerd[1834]: time="2025-07-07T05:53:50.424500697Z" level=info msg="StartContainer for \"6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c\"" Jul 7 05:53:50.490744 containerd[1834]: time="2025-07-07T05:53:50.490690289Z" level=info msg="StartContainer for \"6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c\" returns successfully" Jul 7 05:53:50.523017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c-rootfs.mount: Deactivated successfully. Jul 7 05:53:51.393974 kubelet[3307]: E0707 05:53:51.393686 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:51.525154 containerd[1834]: time="2025-07-07T05:53:51.525049400Z" level=info msg="shim disconnected" id=6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c namespace=k8s.io Jul 7 05:53:51.525154 containerd[1834]: time="2025-07-07T05:53:51.525142560Z" level=warning msg="cleaning up after shim disconnected" id=6b0ca3c84ee1e595f5d672b1c827eb3930a2c617b51492a513cd1c4d05991d5c namespace=k8s.io Jul 7 05:53:51.525662 containerd[1834]: time="2025-07-07T05:53:51.525168320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:53:52.541085 containerd[1834]: time="2025-07-07T05:53:52.540998491Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 05:53:53.393284 kubelet[3307]: E0707 05:53:53.392915 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:55.392481 kubelet[3307]: E0707 05:53:55.392199 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:56.152111 containerd[1834]: time="2025-07-07T05:53:56.151373026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:56.157585 containerd[1834]: time="2025-07-07T05:53:56.157519865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 05:53:56.163115 containerd[1834]: time="2025-07-07T05:53:56.163030705Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:56.169919 containerd[1834]: time="2025-07-07T05:53:56.169844144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:56.170873 containerd[1834]: time="2025-07-07T05:53:56.170726504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo 
tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.629679933s" Jul 7 05:53:56.170873 containerd[1834]: time="2025-07-07T05:53:56.170765504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 05:53:56.174231 containerd[1834]: time="2025-07-07T05:53:56.174185824Z" level=info msg="CreateContainer within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 05:53:56.231106 containerd[1834]: time="2025-07-07T05:53:56.231017537Z" level=info msg="CreateContainer within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de\"" Jul 7 05:53:56.233094 containerd[1834]: time="2025-07-07T05:53:56.232609057Z" level=info msg="StartContainer for \"695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de\"" Jul 7 05:53:56.297866 containerd[1834]: time="2025-07-07T05:53:56.297821690Z" level=info msg="StartContainer for \"695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de\" returns successfully" Jul 7 05:53:57.396166 kubelet[3307]: E0707 05:53:57.393148 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:57.517711 containerd[1834]: time="2025-07-07T05:53:57.517622600Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE 
\"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:53:57.541915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de-rootfs.mount: Deactivated successfully. Jul 7 05:53:57.550548 kubelet[3307]: I0707 05:53:57.548121 3307 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:53:58.303918 kubelet[3307]: I0707 05:53:57.671512 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pf25\" (UniqueName: \"kubernetes.io/projected/ad1a3f21-6e23-4ca3-b64b-f1320f379e83-kube-api-access-2pf25\") pod \"coredns-7c65d6cfc9-znvbk\" (UID: \"ad1a3f21-6e23-4ca3-b64b-f1320f379e83\") " pod="kube-system/coredns-7c65d6cfc9-znvbk" Jul 7 05:53:58.303918 kubelet[3307]: I0707 05:53:57.671681 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/617d9772-8b6b-4315-9d5b-66e788e56a42-calico-apiserver-certs\") pod \"calico-apiserver-577bd8b5bc-8dtfq\" (UID: \"617d9772-8b6b-4315-9d5b-66e788e56a42\") " pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" Jul 7 05:53:58.303918 kubelet[3307]: I0707 05:53:57.671793 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-ca-bundle\") pod \"whisker-df7ff855f-vz8wf\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " pod="calico-system/whisker-df7ff855f-vz8wf" Jul 7 05:53:58.303918 kubelet[3307]: I0707 05:53:57.671821 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dk4g\" (UniqueName: 
\"kubernetes.io/projected/3631a971-c63b-496d-bbcc-d7a38e1fa7de-kube-api-access-5dk4g\") pod \"coredns-7c65d6cfc9-4668m\" (UID: \"3631a971-c63b-496d-bbcc-d7a38e1fa7de\") " pod="kube-system/coredns-7c65d6cfc9-4668m" Jul 7 05:53:58.303918 kubelet[3307]: I0707 05:53:57.671838 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-backend-key-pair\") pod \"whisker-df7ff855f-vz8wf\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " pod="calico-system/whisker-df7ff855f-vz8wf" Jul 7 05:53:58.304279 kubelet[3307]: I0707 05:53:57.672047 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cptf\" (UniqueName: \"kubernetes.io/projected/241b2127-b6e0-4525-9ffd-ee1fb00c225a-kube-api-access-4cptf\") pod \"calico-kube-controllers-5c48c79585-dwtff\" (UID: \"241b2127-b6e0-4525-9ffd-ee1fb00c225a\") " pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" Jul 7 05:53:58.304279 kubelet[3307]: I0707 05:53:57.673044 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04f2c61c-8783-44a5-a6b4-59965cc32dc5-calico-apiserver-certs\") pod \"calico-apiserver-577bd8b5bc-khd5g\" (UID: \"04f2c61c-8783-44a5-a6b4-59965cc32dc5\") " pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" Jul 7 05:53:58.304279 kubelet[3307]: I0707 05:53:57.673113 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f15f550-1ef2-4040-9e85-a0ab3be387d3-config\") pod \"goldmane-58fd7646b9-75wzk\" (UID: \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\") " pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 05:53:58.304279 kubelet[3307]: I0707 05:53:57.673136 3307 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429wd\" (UniqueName: \"kubernetes.io/projected/4fe50b8e-0e00-42c7-bd7e-7a3329714697-kube-api-access-429wd\") pod \"whisker-df7ff855f-vz8wf\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " pod="calico-system/whisker-df7ff855f-vz8wf" Jul 7 05:53:58.304279 kubelet[3307]: I0707 05:53:57.673165 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4f15f550-1ef2-4040-9e85-a0ab3be387d3-goldmane-key-pair\") pod \"goldmane-58fd7646b9-75wzk\" (UID: \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\") " pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 05:53:58.304408 kubelet[3307]: I0707 05:53:57.673192 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3631a971-c63b-496d-bbcc-d7a38e1fa7de-config-volume\") pod \"coredns-7c65d6cfc9-4668m\" (UID: \"3631a971-c63b-496d-bbcc-d7a38e1fa7de\") " pod="kube-system/coredns-7c65d6cfc9-4668m" Jul 7 05:53:58.304408 kubelet[3307]: I0707 05:53:57.673218 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad1a3f21-6e23-4ca3-b64b-f1320f379e83-config-volume\") pod \"coredns-7c65d6cfc9-znvbk\" (UID: \"ad1a3f21-6e23-4ca3-b64b-f1320f379e83\") " pod="kube-system/coredns-7c65d6cfc9-znvbk" Jul 7 05:53:58.304408 kubelet[3307]: I0707 05:53:57.673235 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5xp\" (UniqueName: \"kubernetes.io/projected/4f15f550-1ef2-4040-9e85-a0ab3be387d3-kube-api-access-xg5xp\") pod \"goldmane-58fd7646b9-75wzk\" (UID: \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\") " pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 05:53:58.304408 kubelet[3307]: I0707 05:53:57.673254 3307 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f15f550-1ef2-4040-9e85-a0ab3be387d3-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-75wzk\" (UID: \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\") " pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 05:53:58.304408 kubelet[3307]: I0707 05:53:57.673273 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlp5n\" (UniqueName: \"kubernetes.io/projected/617d9772-8b6b-4315-9d5b-66e788e56a42-kube-api-access-mlp5n\") pod \"calico-apiserver-577bd8b5bc-8dtfq\" (UID: \"617d9772-8b6b-4315-9d5b-66e788e56a42\") " pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" Jul 7 05:53:58.304539 kubelet[3307]: I0707 05:53:57.673298 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/241b2127-b6e0-4525-9ffd-ee1fb00c225a-tigera-ca-bundle\") pod \"calico-kube-controllers-5c48c79585-dwtff\" (UID: \"241b2127-b6e0-4525-9ffd-ee1fb00c225a\") " pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" Jul 7 05:53:58.304539 kubelet[3307]: I0707 05:53:57.673318 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpt5c\" (UniqueName: \"kubernetes.io/projected/04f2c61c-8783-44a5-a6b4-59965cc32dc5-kube-api-access-lpt5c\") pod \"calico-apiserver-577bd8b5bc-khd5g\" (UID: \"04f2c61c-8783-44a5-a6b4-59965cc32dc5\") " pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" Jul 7 05:53:58.314923 containerd[1834]: time="2025-07-07T05:53:58.314349995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4668m,Uid:3631a971-c63b-496d-bbcc-d7a38e1fa7de,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:58.385094 containerd[1834]: time="2025-07-07T05:53:58.385005708Z" level=info msg="shim 
disconnected" id=695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de namespace=k8s.io Jul 7 05:53:58.385094 containerd[1834]: time="2025-07-07T05:53:58.385077308Z" level=warning msg="cleaning up after shim disconnected" id=695b3561795b7c2bdaee7ec01cb3b1b1ddce27788858324a0e0614c811dd69de namespace=k8s.io Jul 7 05:53:58.385094 containerd[1834]: time="2025-07-07T05:53:58.385090268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:53:58.477038 containerd[1834]: time="2025-07-07T05:53:58.476984338Z" level=error msg="Failed to destroy network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.477780 containerd[1834]: time="2025-07-07T05:53:58.477620738Z" level=error msg="encountered an error cleaning up failed sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.477780 containerd[1834]: time="2025-07-07T05:53:58.477678058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4668m,Uid:3631a971-c63b-496d-bbcc-d7a38e1fa7de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.478114 kubelet[3307]: E0707 05:53:58.478007 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.478755 kubelet[3307]: E0707 05:53:58.478145 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4668m" Jul 7 05:53:58.478755 kubelet[3307]: E0707 05:53:58.478178 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4668m" Jul 7 05:53:58.478755 kubelet[3307]: E0707 05:53:58.478260 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4668m_kube-system(3631a971-c63b-496d-bbcc-d7a38e1fa7de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4668m_kube-system(3631a971-c63b-496d-bbcc-d7a38e1fa7de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4668m" 
podUID="3631a971-c63b-496d-bbcc-d7a38e1fa7de" Jul 7 05:53:58.558621 kubelet[3307]: I0707 05:53:58.558027 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:53:58.559219 containerd[1834]: time="2025-07-07T05:53:58.558856929Z" level=info msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" Jul 7 05:53:58.560045 containerd[1834]: time="2025-07-07T05:53:58.559967529Z" level=info msg="Ensure that sandbox 484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59 in task-service has been cleanup successfully" Jul 7 05:53:58.569856 containerd[1834]: time="2025-07-07T05:53:58.569797648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 05:53:58.603590 containerd[1834]: time="2025-07-07T05:53:58.603502084Z" level=error msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" failed" error="failed to destroy network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.603797 kubelet[3307]: E0707 05:53:58.603761 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:53:58.603896 kubelet[3307]: E0707 05:53:58.603829 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59"} Jul 7 05:53:58.603939 kubelet[3307]: E0707 05:53:58.603896 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3631a971-c63b-496d-bbcc-d7a38e1fa7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:58.603939 kubelet[3307]: E0707 05:53:58.603920 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3631a971-c63b-496d-bbcc-d7a38e1fa7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4668m" podUID="3631a971-c63b-496d-bbcc-d7a38e1fa7de" Jul 7 05:53:58.608580 containerd[1834]: time="2025-07-07T05:53:58.608207604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c48c79585-dwtff,Uid:241b2127-b6e0-4525-9ffd-ee1fb00c225a,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:58.608580 containerd[1834]: time="2025-07-07T05:53:58.608373084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-8dtfq,Uid:617d9772-8b6b-4315-9d5b-66e788e56a42,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:53:58.617974 containerd[1834]: time="2025-07-07T05:53:58.617645643Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-khd5g,Uid:04f2c61c-8783-44a5-a6b4-59965cc32dc5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 05:53:58.631389 containerd[1834]: time="2025-07-07T05:53:58.631339121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-znvbk,Uid:ad1a3f21-6e23-4ca3-b64b-f1320f379e83,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:58.643377 containerd[1834]: time="2025-07-07T05:53:58.642777720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df7ff855f-vz8wf,Uid:4fe50b8e-0e00-42c7-bd7e-7a3329714697,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:58.643377 containerd[1834]: time="2025-07-07T05:53:58.643130160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75wzk,Uid:4f15f550-1ef2-4040-9e85-a0ab3be387d3,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:58.793158 containerd[1834]: time="2025-07-07T05:53:58.793098464Z" level=error msg="Failed to destroy network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.793918 containerd[1834]: time="2025-07-07T05:53:58.793762864Z" level=error msg="encountered an error cleaning up failed sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.793918 containerd[1834]: time="2025-07-07T05:53:58.793838064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c48c79585-dwtff,Uid:241b2127-b6e0-4525-9ffd-ee1fb00c225a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.794584 kubelet[3307]: E0707 05:53:58.794137 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.794584 kubelet[3307]: E0707 05:53:58.794210 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" Jul 7 05:53:58.794584 kubelet[3307]: E0707 05:53:58.794240 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" Jul 7 05:53:58.794766 kubelet[3307]: E0707 05:53:58.794290 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c48c79585-dwtff_calico-system(241b2127-b6e0-4525-9ffd-ee1fb00c225a)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-5c48c79585-dwtff_calico-system(241b2127-b6e0-4525-9ffd-ee1fb00c225a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" podUID="241b2127-b6e0-4525-9ffd-ee1fb00c225a" Jul 7 05:53:58.894554 containerd[1834]: time="2025-07-07T05:53:58.894316893Z" level=error msg="Failed to destroy network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.896156 containerd[1834]: time="2025-07-07T05:53:58.895996533Z" level=error msg="encountered an error cleaning up failed sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.896636 containerd[1834]: time="2025-07-07T05:53:58.896558973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-khd5g,Uid:04f2c61c-8783-44a5-a6b4-59965cc32dc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.897625 kubelet[3307]: 
E0707 05:53:58.897120 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.897625 kubelet[3307]: E0707 05:53:58.897272 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" Jul 7 05:53:58.897625 kubelet[3307]: E0707 05:53:58.897299 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" Jul 7 05:53:58.897802 kubelet[3307]: E0707 05:53:58.897410 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577bd8b5bc-khd5g_calico-apiserver(04f2c61c-8783-44a5-a6b4-59965cc32dc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577bd8b5bc-khd5g_calico-apiserver(04f2c61c-8783-44a5-a6b4-59965cc32dc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" podUID="04f2c61c-8783-44a5-a6b4-59965cc32dc5" Jul 7 05:53:58.988408 containerd[1834]: time="2025-07-07T05:53:58.988214483Z" level=error msg="Failed to destroy network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.988977 containerd[1834]: time="2025-07-07T05:53:58.988929563Z" level=error msg="encountered an error cleaning up failed sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.989148 containerd[1834]: time="2025-07-07T05:53:58.989097243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-znvbk,Uid:ad1a3f21-6e23-4ca3-b64b-f1320f379e83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.990488 kubelet[3307]: E0707 05:53:58.989402 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.990488 kubelet[3307]: E0707 05:53:58.990239 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-znvbk" Jul 7 05:53:58.990488 kubelet[3307]: E0707 05:53:58.990290 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-znvbk" Jul 7 05:53:58.990689 kubelet[3307]: E0707 05:53:58.990349 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-znvbk_kube-system(ad1a3f21-6e23-4ca3-b64b-f1320f379e83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-znvbk_kube-system(ad1a3f21-6e23-4ca3-b64b-f1320f379e83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-znvbk" podUID="ad1a3f21-6e23-4ca3-b64b-f1320f379e83" Jul 7 05:53:58.991286 containerd[1834]: time="2025-07-07T05:53:58.991245843Z" level=error msg="Failed to destroy network for sandbox 
\"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.991720 containerd[1834]: time="2025-07-07T05:53:58.991690123Z" level=error msg="encountered an error cleaning up failed sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.991844 containerd[1834]: time="2025-07-07T05:53:58.991821603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-8dtfq,Uid:617d9772-8b6b-4315-9d5b-66e788e56a42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.992315 kubelet[3307]: E0707 05:53:58.992142 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:58.992315 kubelet[3307]: E0707 05:53:58.992203 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" Jul 7 05:53:58.992315 kubelet[3307]: E0707 05:53:58.992223 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" Jul 7 05:53:58.992472 kubelet[3307]: E0707 05:53:58.992260 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577bd8b5bc-8dtfq_calico-apiserver(617d9772-8b6b-4315-9d5b-66e788e56a42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577bd8b5bc-8dtfq_calico-apiserver(617d9772-8b6b-4315-9d5b-66e788e56a42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" podUID="617d9772-8b6b-4315-9d5b-66e788e56a42" Jul 7 05:53:59.009896 containerd[1834]: time="2025-07-07T05:53:59.009730001Z" level=error msg="Failed to destroy network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.010505 containerd[1834]: 
time="2025-07-07T05:53:59.010312961Z" level=error msg="encountered an error cleaning up failed sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.010505 containerd[1834]: time="2025-07-07T05:53:59.010386881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df7ff855f-vz8wf,Uid:4fe50b8e-0e00-42c7-bd7e-7a3329714697,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.011097 kubelet[3307]: E0707 05:53:59.010797 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.011097 kubelet[3307]: E0707 05:53:59.010863 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df7ff855f-vz8wf" Jul 7 05:53:59.011097 kubelet[3307]: E0707 05:53:59.010887 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df7ff855f-vz8wf" Jul 7 05:53:59.011278 kubelet[3307]: E0707 05:53:59.010932 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-df7ff855f-vz8wf_calico-system(4fe50b8e-0e00-42c7-bd7e-7a3329714697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-df7ff855f-vz8wf_calico-system(4fe50b8e-0e00-42c7-bd7e-7a3329714697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-df7ff855f-vz8wf" podUID="4fe50b8e-0e00-42c7-bd7e-7a3329714697" Jul 7 05:53:59.025317 containerd[1834]: time="2025-07-07T05:53:59.025261559Z" level=error msg="Failed to destroy network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.025647 containerd[1834]: time="2025-07-07T05:53:59.025616879Z" level=error msg="encountered an error cleaning up failed sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 05:53:59.025703 containerd[1834]: time="2025-07-07T05:53:59.025676439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75wzk,Uid:4f15f550-1ef2-4040-9e85-a0ab3be387d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.026120 kubelet[3307]: E0707 05:53:59.025961 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.026120 kubelet[3307]: E0707 05:53:59.026038 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 05:53:59.026120 kubelet[3307]: E0707 05:53:59.026076 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-75wzk" Jul 7 
05:53:59.027401 kubelet[3307]: E0707 05:53:59.026306 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-75wzk_calico-system(4f15f550-1ef2-4040-9e85-a0ab3be387d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-75wzk_calico-system(4f15f550-1ef2-4040-9e85-a0ab3be387d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-75wzk" podUID="4f15f550-1ef2-4040-9e85-a0ab3be387d3" Jul 7 05:53:59.396368 containerd[1834]: time="2025-07-07T05:53:59.396162456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7clz,Uid:c92a7bb2-2db4-4f96-97d0-028fc27545ab,Namespace:calico-system,Attempt:0,}" Jul 7 05:53:59.521240 containerd[1834]: time="2025-07-07T05:53:59.521178177Z" level=error msg="Failed to destroy network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.521619 containerd[1834]: time="2025-07-07T05:53:59.521583217Z" level=error msg="encountered an error cleaning up failed sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.521679 containerd[1834]: time="2025-07-07T05:53:59.521650217Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-z7clz,Uid:c92a7bb2-2db4-4f96-97d0-028fc27545ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.522114 kubelet[3307]: E0707 05:53:59.521896 3307 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.522114 kubelet[3307]: E0707 05:53:59.521966 3307 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:59.522114 kubelet[3307]: E0707 05:53:59.521985 3307 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7clz" Jul 7 05:53:59.522567 kubelet[3307]: E0707 05:53:59.522036 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"csi-node-driver-z7clz_calico-system(c92a7bb2-2db4-4f96-97d0-028fc27545ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7clz_calico-system(c92a7bb2-2db4-4f96-97d0-028fc27545ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:59.543212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c-shm.mount: Deactivated successfully. Jul 7 05:53:59.543378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44-shm.mount: Deactivated successfully. 
Jul 7 05:53:59.567980 kubelet[3307]: I0707 05:53:59.567936 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:53:59.569192 containerd[1834]: time="2025-07-07T05:53:59.569154643Z" level=info msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" Jul 7 05:53:59.570599 containerd[1834]: time="2025-07-07T05:53:59.570011562Z" level=info msg="Ensure that sandbox 4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716 in task-service has been cleanup successfully" Jul 7 05:53:59.572261 kubelet[3307]: I0707 05:53:59.572197 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:53:59.573601 containerd[1834]: time="2025-07-07T05:53:59.573563681Z" level=info msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" Jul 7 05:53:59.573913 containerd[1834]: time="2025-07-07T05:53:59.573889521Z" level=info msg="Ensure that sandbox 023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e in task-service has been cleanup successfully" Jul 7 05:53:59.575749 kubelet[3307]: I0707 05:53:59.575569 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:53:59.576542 containerd[1834]: time="2025-07-07T05:53:59.576367160Z" level=info msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" Jul 7 05:53:59.577767 containerd[1834]: time="2025-07-07T05:53:59.577499160Z" level=info msg="Ensure that sandbox e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d in task-service has been cleanup successfully" Jul 7 05:53:59.581626 kubelet[3307]: I0707 05:53:59.581544 3307 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:53:59.591035 containerd[1834]: time="2025-07-07T05:53:59.590823796Z" level=info msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" Jul 7 05:53:59.591583 containerd[1834]: time="2025-07-07T05:53:59.591429436Z" level=info msg="Ensure that sandbox fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44 in task-service has been cleanup successfully" Jul 7 05:53:59.592335 kubelet[3307]: I0707 05:53:59.591741 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:53:59.597920 containerd[1834]: time="2025-07-07T05:53:59.597707554Z" level=info msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" Jul 7 05:53:59.598114 containerd[1834]: time="2025-07-07T05:53:59.597997114Z" level=info msg="Ensure that sandbox 081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28 in task-service has been cleanup successfully" Jul 7 05:53:59.598791 kubelet[3307]: I0707 05:53:59.598357 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:53:59.603392 containerd[1834]: time="2025-07-07T05:53:59.602768912Z" level=info msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" Jul 7 05:53:59.604725 containerd[1834]: time="2025-07-07T05:53:59.604417672Z" level=info msg="Ensure that sandbox 4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee in task-service has been cleanup successfully" Jul 7 05:53:59.606626 kubelet[3307]: I0707 05:53:59.606419 3307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:53:59.610047 containerd[1834]: 
time="2025-07-07T05:53:59.609943830Z" level=info msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" Jul 7 05:53:59.610386 containerd[1834]: time="2025-07-07T05:53:59.610337190Z" level=info msg="Ensure that sandbox a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c in task-service has been cleanup successfully" Jul 7 05:53:59.690070 containerd[1834]: time="2025-07-07T05:53:59.689765486Z" level=error msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" failed" error="failed to destroy network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.692295 kubelet[3307]: E0707 05:53:59.692118 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:53:59.692295 kubelet[3307]: E0707 05:53:59.692180 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44"} Jul 7 05:53:59.692295 kubelet[3307]: E0707 05:53:59.692222 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241b2127-b6e0-4525-9ffd-ee1fb00c225a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.692295 kubelet[3307]: E0707 05:53:59.692253 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241b2127-b6e0-4525-9ffd-ee1fb00c225a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" podUID="241b2127-b6e0-4525-9ffd-ee1fb00c225a" Jul 7 05:53:59.700762 containerd[1834]: time="2025-07-07T05:53:59.700600642Z" level=error msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" failed" error="failed to destroy network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.705588 kubelet[3307]: E0707 05:53:59.705429 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:53:59.705588 kubelet[3307]: E0707 05:53:59.705488 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716"} Jul 7 05:53:59.705588 kubelet[3307]: E0707 05:53:59.705525 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"617d9772-8b6b-4315-9d5b-66e788e56a42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.705588 kubelet[3307]: E0707 05:53:59.705546 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"617d9772-8b6b-4315-9d5b-66e788e56a42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" podUID="617d9772-8b6b-4315-9d5b-66e788e56a42" Jul 7 05:53:59.714960 containerd[1834]: time="2025-07-07T05:53:59.714570438Z" level=error msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" failed" error="failed to destroy network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.715452 kubelet[3307]: E0707 05:53:59.715307 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:53:59.715452 kubelet[3307]: E0707 05:53:59.715381 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28"} Jul 7 05:53:59.716662 kubelet[3307]: E0707 05:53:59.715612 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.716662 kubelet[3307]: E0707 05:53:59.715645 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c92a7bb2-2db4-4f96-97d0-028fc27545ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7clz" podUID="c92a7bb2-2db4-4f96-97d0-028fc27545ab" Jul 7 05:53:59.724834 containerd[1834]: time="2025-07-07T05:53:59.724757635Z" level=error msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" failed" error="failed to destroy network for 
sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.725580 containerd[1834]: time="2025-07-07T05:53:59.724757795Z" level=error msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" failed" error="failed to destroy network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.725632 kubelet[3307]: E0707 05:53:59.725132 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:53:59.725632 kubelet[3307]: E0707 05:53:59.725203 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d"} Jul 7 05:53:59.725632 kubelet[3307]: E0707 05:53:59.725243 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.725632 kubelet[3307]: E0707 05:53:59.725273 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f15f550-1ef2-4040-9e85-a0ab3be387d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-75wzk" podUID="4f15f550-1ef2-4040-9e85-a0ab3be387d3" Jul 7 05:53:59.725806 kubelet[3307]: E0707 05:53:59.725314 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:53:59.725806 kubelet[3307]: E0707 05:53:59.725330 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e"} Jul 7 05:53:59.725806 kubelet[3307]: E0707 05:53:59.725350 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 7 05:53:59.725806 kubelet[3307]: E0707 05:53:59.725404 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-df7ff855f-vz8wf" podUID="4fe50b8e-0e00-42c7-bd7e-7a3329714697" Jul 7 05:53:59.729016 containerd[1834]: time="2025-07-07T05:53:59.728593194Z" level=error msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" failed" error="failed to destroy network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.729183 kubelet[3307]: E0707 05:53:59.728863 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:53:59.729183 kubelet[3307]: E0707 05:53:59.728915 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee"} Jul 7 05:53:59.729183 kubelet[3307]: E0707 05:53:59.728948 3307 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad1a3f21-6e23-4ca3-b64b-f1320f379e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.729183 kubelet[3307]: E0707 05:53:59.728970 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad1a3f21-6e23-4ca3-b64b-f1320f379e83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-znvbk" podUID="ad1a3f21-6e23-4ca3-b64b-f1320f379e83" Jul 7 05:53:59.730116 containerd[1834]: time="2025-07-07T05:53:59.730012433Z" level=error msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" failed" error="failed to destroy network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:53:59.730336 kubelet[3307]: E0707 05:53:59.730292 3307 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:53:59.730397 kubelet[3307]: E0707 05:53:59.730349 3307 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c"} Jul 7 05:53:59.730397 kubelet[3307]: E0707 05:53:59.730390 3307 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04f2c61c-8783-44a5-a6b4-59965cc32dc5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:53:59.730480 kubelet[3307]: E0707 05:53:59.730410 3307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04f2c61c-8783-44a5-a6b4-59965cc32dc5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" podUID="04f2c61c-8783-44a5-a6b4-59965cc32dc5" Jul 7 05:54:05.509427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667479964.mount: Deactivated successfully. 
Jul 7 05:54:05.582111 containerd[1834]: time="2025-07-07T05:54:05.581693600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:05.586973 containerd[1834]: time="2025-07-07T05:54:05.586896198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 05:54:05.606540 containerd[1834]: time="2025-07-07T05:54:05.606472152Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:05.611569 containerd[1834]: time="2025-07-07T05:54:05.611479310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:05.612547 containerd[1834]: time="2025-07-07T05:54:05.612023630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.042161342s" Jul 7 05:54:05.612547 containerd[1834]: time="2025-07-07T05:54:05.612082550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 05:54:05.632518 containerd[1834]: time="2025-07-07T05:54:05.632462584Z" level=info msg="CreateContainer within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 05:54:05.691262 containerd[1834]: time="2025-07-07T05:54:05.691194846Z" level=info msg="CreateContainer 
within sandbox \"27ff027592cb5908ec6bf6fa35dc73551ad185ffb6815ae207806260a078e4f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"753db72323d53e4c48ca7fcf22edfe0a301f9c678d89c6719ac7efec169973dd\"" Jul 7 05:54:05.693329 containerd[1834]: time="2025-07-07T05:54:05.693166805Z" level=info msg="StartContainer for \"753db72323d53e4c48ca7fcf22edfe0a301f9c678d89c6719ac7efec169973dd\"" Jul 7 05:54:05.761901 containerd[1834]: time="2025-07-07T05:54:05.761655864Z" level=info msg="StartContainer for \"753db72323d53e4c48ca7fcf22edfe0a301f9c678d89c6719ac7efec169973dd\" returns successfully" Jul 7 05:54:06.087222 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 05:54:06.087366 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 05:54:06.210331 containerd[1834]: time="2025-07-07T05:54:06.210283007Z" level=info msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.329 [INFO][4472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.329 [INFO][4472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" iface="eth0" netns="/var/run/netns/cni-a005e65b-575a-e456-635d-bde1dda57181" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.329 [INFO][4472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" iface="eth0" netns="/var/run/netns/cni-a005e65b-575a-e456-635d-bde1dda57181" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.330 [INFO][4472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" iface="eth0" netns="/var/run/netns/cni-a005e65b-575a-e456-635d-bde1dda57181" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.330 [INFO][4472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.330 [INFO][4472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.395 [INFO][4484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.398 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.398 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.413 [WARNING][4484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.413 [INFO][4484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.416 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:06.426117 containerd[1834]: 2025-07-07 05:54:06.422 [INFO][4472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:06.429361 containerd[1834]: time="2025-07-07T05:54:06.427277660Z" level=info msg="TearDown network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" successfully" Jul 7 05:54:06.429361 containerd[1834]: time="2025-07-07T05:54:06.427318420Z" level=info msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" returns successfully" Jul 7 05:54:06.512273 systemd[1]: run-netns-cni\x2da005e65b\x2d575a\x2de456\x2d635d\x2dbde1dda57181.mount: Deactivated successfully. 
Jul 7 05:54:06.536606 kubelet[3307]: I0707 05:54:06.536228 3307 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-backend-key-pair\") pod \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " Jul 7 05:54:06.536606 kubelet[3307]: I0707 05:54:06.536292 3307 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-ca-bundle\") pod \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " Jul 7 05:54:06.536606 kubelet[3307]: I0707 05:54:06.536315 3307 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-429wd\" (UniqueName: \"kubernetes.io/projected/4fe50b8e-0e00-42c7-bd7e-7a3329714697-kube-api-access-429wd\") pod \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\" (UID: \"4fe50b8e-0e00-42c7-bd7e-7a3329714697\") " Jul 7 05:54:06.538776 kubelet[3307]: I0707 05:54:06.538487 3307 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4fe50b8e-0e00-42c7-bd7e-7a3329714697" (UID: "4fe50b8e-0e00-42c7-bd7e-7a3329714697"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:54:06.556403 kubelet[3307]: I0707 05:54:06.553546 3307 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe50b8e-0e00-42c7-bd7e-7a3329714697-kube-api-access-429wd" (OuterVolumeSpecName: "kube-api-access-429wd") pod "4fe50b8e-0e00-42c7-bd7e-7a3329714697" (UID: "4fe50b8e-0e00-42c7-bd7e-7a3329714697"). InnerVolumeSpecName "kube-api-access-429wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:54:06.555651 systemd[1]: var-lib-kubelet-pods-4fe50b8e\x2d0e00\x2d42c7\x2dbd7e\x2d7a3329714697-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d429wd.mount: Deactivated successfully. Jul 7 05:54:06.555852 systemd[1]: var-lib-kubelet-pods-4fe50b8e\x2d0e00\x2d42c7\x2dbd7e\x2d7a3329714697-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 05:54:06.560107 kubelet[3307]: I0707 05:54:06.557323 3307 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4fe50b8e-0e00-42c7-bd7e-7a3329714697" (UID: "4fe50b8e-0e00-42c7-bd7e-7a3329714697"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 05:54:06.639414 kubelet[3307]: I0707 05:54:06.639360 3307 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-ca-bundle\") on node \"ci-4081.3.4-a-5429f7cfbd\" DevicePath \"\"" Jul 7 05:54:06.639966 kubelet[3307]: I0707 05:54:06.639391 3307 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-429wd\" (UniqueName: \"kubernetes.io/projected/4fe50b8e-0e00-42c7-bd7e-7a3329714697-kube-api-access-429wd\") on node \"ci-4081.3.4-a-5429f7cfbd\" DevicePath \"\"" Jul 7 05:54:06.639966 kubelet[3307]: I0707 05:54:06.639756 3307 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4fe50b8e-0e00-42c7-bd7e-7a3329714697-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-5429f7cfbd\" DevicePath \"\"" Jul 7 05:54:06.671009 kubelet[3307]: I0707 05:54:06.669042 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w5flf" 
podStartSLOduration=2.268765165 podStartE2EDuration="21.669006186s" podCreationTimestamp="2025-07-07 05:53:45 +0000 UTC" firstStartedPulling="2025-07-07 05:53:46.212694969 +0000 UTC m=+24.948834352" lastFinishedPulling="2025-07-07 05:54:05.61293599 +0000 UTC m=+44.349075373" observedRunningTime="2025-07-07 05:54:06.666339587 +0000 UTC m=+45.402478970" watchObservedRunningTime="2025-07-07 05:54:06.669006186 +0000 UTC m=+45.405145569" Jul 7 05:54:06.841720 kubelet[3307]: I0707 05:54:06.841549 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71cb083c-5010-4ee4-8a97-b83592767f39-whisker-backend-key-pair\") pod \"whisker-b7bc54c89-dqx59\" (UID: \"71cb083c-5010-4ee4-8a97-b83592767f39\") " pod="calico-system/whisker-b7bc54c89-dqx59" Jul 7 05:54:06.841720 kubelet[3307]: I0707 05:54:06.841619 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cb083c-5010-4ee4-8a97-b83592767f39-whisker-ca-bundle\") pod \"whisker-b7bc54c89-dqx59\" (UID: \"71cb083c-5010-4ee4-8a97-b83592767f39\") " pod="calico-system/whisker-b7bc54c89-dqx59" Jul 7 05:54:06.841720 kubelet[3307]: I0707 05:54:06.841652 3307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4t9x\" (UniqueName: \"kubernetes.io/projected/71cb083c-5010-4ee4-8a97-b83592767f39-kube-api-access-h4t9x\") pod \"whisker-b7bc54c89-dqx59\" (UID: \"71cb083c-5010-4ee4-8a97-b83592767f39\") " pod="calico-system/whisker-b7bc54c89-dqx59" Jul 7 05:54:07.063692 containerd[1834]: time="2025-07-07T05:54:07.063185065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7bc54c89-dqx59,Uid:71cb083c-5010-4ee4-8a97-b83592767f39,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:07.255000 systemd-networkd[1397]: cali58dbb8a3cd5: Link UP Jul 7 05:54:07.255811 
systemd-networkd[1397]: cali58dbb8a3cd5: Gained carrier Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.154 [INFO][4530] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.169 [INFO][4530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0 whisker-b7bc54c89- calico-system 71cb083c-5010-4ee4-8a97-b83592767f39 889 0 2025-07-07 05:54:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b7bc54c89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd whisker-b7bc54c89-dqx59 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali58dbb8a3cd5 [] [] }} ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.169 [INFO][4530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.196 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" HandleID="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.196 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" HandleID="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"whisker-b7bc54c89-dqx59", "timestamp":"2025-07-07 05:54:07.196506625 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.196 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.196 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.196 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.206 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.211 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.217 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.220 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.223 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.223 [INFO][4542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.225 [INFO][4542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037 Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.233 [INFO][4542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.240 [INFO][4542] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.240 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.240 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:07.276174 containerd[1834]: 2025-07-07 05:54:07.240 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" HandleID="k8s-pod-network.ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.246 [INFO][4530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0", GenerateName:"whisker-b7bc54c89-", Namespace:"calico-system", SelfLink:"", UID:"71cb083c-5010-4ee4-8a97-b83592767f39", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7bc54c89", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"whisker-b7bc54c89-dqx59", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58dbb8a3cd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.246 [INFO][4530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.193/32] ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.247 [INFO][4530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58dbb8a3cd5 ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.256 [INFO][4530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.256 [INFO][4530] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0", GenerateName:"whisker-b7bc54c89-", Namespace:"calico-system", SelfLink:"", UID:"71cb083c-5010-4ee4-8a97-b83592767f39", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7bc54c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037", Pod:"whisker-b7bc54c89-dqx59", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58dbb8a3cd5", MAC:"5e:c1:7c:06:c8:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:07.276784 containerd[1834]: 2025-07-07 05:54:07.273 [INFO][4530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037" 
Namespace="calico-system" Pod="whisker-b7bc54c89-dqx59" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--b7bc54c89--dqx59-eth0" Jul 7 05:54:07.312960 containerd[1834]: time="2025-07-07T05:54:07.312447034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:07.313542 containerd[1834]: time="2025-07-07T05:54:07.312914354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:07.313542 containerd[1834]: time="2025-07-07T05:54:07.312947634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:07.313542 containerd[1834]: time="2025-07-07T05:54:07.313200994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:07.357138 containerd[1834]: time="2025-07-07T05:54:07.357096389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7bc54c89-dqx59,Uid:71cb083c-5010-4ee4-8a97-b83592767f39,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037\"" Jul 7 05:54:07.359474 containerd[1834]: time="2025-07-07T05:54:07.359423549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 05:54:07.395258 kubelet[3307]: I0707 05:54:07.395206 3307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe50b8e-0e00-42c7-bd7e-7a3329714697" path="/var/lib/kubelet/pods/4fe50b8e-0e00-42c7-bd7e-7a3329714697/volumes" Jul 7 05:54:08.071114 kernel: bpftool[4721]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 05:54:08.337594 systemd-networkd[1397]: vxlan.calico: Link UP Jul 7 05:54:08.337604 systemd-networkd[1397]: vxlan.calico: Gained carrier Jul 7 05:54:08.853288 systemd-networkd[1397]: cali58dbb8a3cd5: 
Gained IPv6LL Jul 7 05:54:09.514103 containerd[1834]: time="2025-07-07T05:54:09.393876161Z" level=info msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.452 [INFO][4826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.452 [INFO][4826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" iface="eth0" netns="/var/run/netns/cni-c3a45c7d-eef8-cfce-7c29-afd46fa753a0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.452 [INFO][4826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" iface="eth0" netns="/var/run/netns/cni-c3a45c7d-eef8-cfce-7c29-afd46fa753a0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.455 [INFO][4826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" iface="eth0" netns="/var/run/netns/cni-c3a45c7d-eef8-cfce-7c29-afd46fa753a0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.455 [INFO][4826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.455 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.481 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.481 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.481 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.493 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.493 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.495 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:09.514103 containerd[1834]: 2025-07-07 05:54:09.497 [INFO][4826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:09.514103 containerd[1834]: time="2025-07-07T05:54:09.499638950Z" level=info msg="TearDown network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" successfully" Jul 7 05:54:09.514103 containerd[1834]: time="2025-07-07T05:54:09.499713150Z" level=info msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" returns successfully" Jul 7 05:54:09.514103 containerd[1834]: time="2025-07-07T05:54:09.500691309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4668m,Uid:3631a971-c63b-496d-bbcc-d7a38e1fa7de,Namespace:kube-system,Attempt:1,}" Jul 7 05:54:09.503616 systemd[1]: run-netns-cni\x2dc3a45c7d\x2deef8\x2dcfce\x2d7c29\x2dafd46fa753a0.mount: Deactivated successfully. 
Jul 7 05:54:09.702076 containerd[1834]: time="2025-07-07T05:54:09.701311887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:09.704092 containerd[1834]: time="2025-07-07T05:54:09.704000207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 05:54:09.711499 containerd[1834]: time="2025-07-07T05:54:09.711168566Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:09.718074 containerd[1834]: time="2025-07-07T05:54:09.717892365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:09.719507 containerd[1834]: time="2025-07-07T05:54:09.719441805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.359964976s" Jul 7 05:54:09.719507 containerd[1834]: time="2025-07-07T05:54:09.719506605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 05:54:09.739837 systemd-networkd[1397]: calib2b44b1a8a0: Link UP Jul 7 05:54:09.742482 systemd-networkd[1397]: calib2b44b1a8a0: Gained carrier Jul 7 05:54:09.757303 containerd[1834]: time="2025-07-07T05:54:09.757155681Z" level=info msg="CreateContainer within sandbox \"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037\" for 
container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.641 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0 coredns-7c65d6cfc9- kube-system 3631a971-c63b-496d-bbcc-d7a38e1fa7de 904 0 2025-07-07 05:53:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd coredns-7c65d6cfc9-4668m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2b44b1a8a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.641 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.680 [INFO][4856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" HandleID="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.680 [INFO][4856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" 
HandleID="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"coredns-7c65d6cfc9-4668m", "timestamp":"2025-07-07 05:54:09.680501129 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.680 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.680 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.680 [INFO][4856] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.690 [INFO][4856] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.695 [INFO][4856] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.702 [INFO][4856] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.706 [INFO][4856] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.709 [INFO][4856] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.709 [INFO][4856] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.711 [INFO][4856] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963 Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.718 [INFO][4856] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.729 [INFO][4856] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.729 [INFO][4856] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.729 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:54:09.771935 containerd[1834]: 2025-07-07 05:54:09.729 [INFO][4856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" HandleID="k8s-pod-network.3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.775168 containerd[1834]: 2025-07-07 05:54:09.732 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3631a971-c63b-496d-bbcc-d7a38e1fa7de", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"coredns-7c65d6cfc9-4668m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib2b44b1a8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:09.775168 containerd[1834]: 2025-07-07 05:54:09.732 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.194/32] ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.775168 containerd[1834]: 2025-07-07 05:54:09.733 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2b44b1a8a0 ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.775168 containerd[1834]: 2025-07-07 05:54:09.743 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.775168 containerd[1834]: 2025-07-07 05:54:09.745 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" 
WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3631a971-c63b-496d-bbcc-d7a38e1fa7de", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963", Pod:"coredns-7c65d6cfc9-4668m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2b44b1a8a0", MAC:"b6:d5:57:60:ad:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:09.775168 containerd[1834]: 
2025-07-07 05:54:09.761 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4668m" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:09.812375 containerd[1834]: time="2025-07-07T05:54:09.812245355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:09.812533 containerd[1834]: time="2025-07-07T05:54:09.812402195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:09.812533 containerd[1834]: time="2025-07-07T05:54:09.812446475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:09.812680 containerd[1834]: time="2025-07-07T05:54:09.812635475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:09.823429 containerd[1834]: time="2025-07-07T05:54:09.823373673Z" level=info msg="CreateContainer within sandbox \"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"225298b63cc4987eb8e1f3cda6f45a79c39b9f6a681a0f3bc8dd4c637b50d3d9\"" Jul 7 05:54:09.825570 containerd[1834]: time="2025-07-07T05:54:09.824632273Z" level=info msg="StartContainer for \"225298b63cc4987eb8e1f3cda6f45a79c39b9f6a681a0f3bc8dd4c637b50d3d9\"" Jul 7 05:54:09.874398 containerd[1834]: time="2025-07-07T05:54:09.874346388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4668m,Uid:3631a971-c63b-496d-bbcc-d7a38e1fa7de,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963\"" Jul 7 05:54:09.890538 containerd[1834]: time="2025-07-07T05:54:09.890218826Z" level=info msg="CreateContainer within sandbox \"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:54:09.915284 containerd[1834]: time="2025-07-07T05:54:09.915228943Z" level=info msg="StartContainer for \"225298b63cc4987eb8e1f3cda6f45a79c39b9f6a681a0f3bc8dd4c637b50d3d9\" returns successfully" Jul 7 05:54:09.918951 containerd[1834]: time="2025-07-07T05:54:09.918574943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 05:54:09.970471 containerd[1834]: time="2025-07-07T05:54:09.970360217Z" level=info msg="CreateContainer within sandbox \"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e3df91bd337acd75a96e082101169897f024682ec3356838b861b8a30a5ae68\"" Jul 7 05:54:09.971669 containerd[1834]: time="2025-07-07T05:54:09.971615377Z" level=info msg="StartContainer for 
\"5e3df91bd337acd75a96e082101169897f024682ec3356838b861b8a30a5ae68\"" Jul 7 05:54:10.261278 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Jul 7 05:54:10.308156 containerd[1834]: time="2025-07-07T05:54:10.307930059Z" level=info msg="StartContainer for \"5e3df91bd337acd75a96e082101169897f024682ec3356838b861b8a30a5ae68\" returns successfully" Jul 7 05:54:10.393033 containerd[1834]: time="2025-07-07T05:54:10.392926170Z" level=info msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.449 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.450 [INFO][4998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" iface="eth0" netns="/var/run/netns/cni-fcffc3d2-85c3-c1f4-77bb-a908968085d0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.450 [INFO][4998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" iface="eth0" netns="/var/run/netns/cni-fcffc3d2-85c3-c1f4-77bb-a908968085d0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.450 [INFO][4998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" iface="eth0" netns="/var/run/netns/cni-fcffc3d2-85c3-c1f4-77bb-a908968085d0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.450 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.450 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.476 [INFO][5005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.507 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.508 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.522 [WARNING][5005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.522 [INFO][5005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.524 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:10.529382 containerd[1834]: 2025-07-07 05:54:10.526 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:10.529382 containerd[1834]: time="2025-07-07T05:54:10.528904314Z" level=info msg="TearDown network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" successfully" Jul 7 05:54:10.529382 containerd[1834]: time="2025-07-07T05:54:10.528936354Z" level=info msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" returns successfully" Jul 7 05:54:10.532225 containerd[1834]: time="2025-07-07T05:54:10.531742474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75wzk,Uid:4f15f550-1ef2-4040-9e85-a0ab3be387d3,Namespace:calico-system,Attempt:1,}" Jul 7 05:54:10.534876 systemd[1]: run-netns-cni\x2dfcffc3d2\x2d85c3\x2dc1f4\x2d77bb\x2da908968085d0.mount: Deactivated successfully. 
Jul 7 05:54:10.674379 kubelet[3307]: I0707 05:54:10.673766 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4668m" podStartSLOduration=43.673743418 podStartE2EDuration="43.673743418s" podCreationTimestamp="2025-07-07 05:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:10.673604658 +0000 UTC m=+49.409744041" watchObservedRunningTime="2025-07-07 05:54:10.673743418 +0000 UTC m=+49.409882801" Jul 7 05:54:10.837279 systemd-networkd[1397]: calib2b44b1a8a0: Gained IPv6LL Jul 7 05:54:10.873828 systemd-networkd[1397]: cali36bbbc129e7: Link UP Jul 7 05:54:10.876321 systemd-networkd[1397]: cali36bbbc129e7: Gained carrier Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.783 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0 goldmane-58fd7646b9- calico-system 4f15f550-1ef2-4040-9e85-a0ab3be387d3 919 0 2025-07-07 05:53:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd goldmane-58fd7646b9-75wzk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali36bbbc129e7 [] [] }} ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.783 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" 
WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.815 [INFO][5029] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" HandleID="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.815 [INFO][5029] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" HandleID="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"goldmane-58fd7646b9-75wzk", "timestamp":"2025-07-07 05:54:10.815762802 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.816 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.816 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.816 [INFO][5029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.825 [INFO][5029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.835 [INFO][5029] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.843 [INFO][5029] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.845 [INFO][5029] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.848 [INFO][5029] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.848 [INFO][5029] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.851 [INFO][5029] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70 Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.859 [INFO][5029] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.866 [INFO][5029] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.867 [INFO][5029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.867 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:10.906573 containerd[1834]: 2025-07-07 05:54:10.867 [INFO][5029] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" HandleID="k8s-pod-network.efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.869 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f15f550-1ef2-4040-9e85-a0ab3be387d3", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"goldmane-58fd7646b9-75wzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36bbbc129e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.869 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.195/32] ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.869 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36bbbc129e7 ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.878 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.882 [INFO][5017] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f15f550-1ef2-4040-9e85-a0ab3be387d3", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70", Pod:"goldmane-58fd7646b9-75wzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36bbbc129e7", MAC:"66:f7:be:18:64:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:10.907214 containerd[1834]: 2025-07-07 05:54:10.897 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70" Namespace="calico-system" Pod="goldmane-58fd7646b9-75wzk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:10.943672 containerd[1834]: time="2025-07-07T05:54:10.943521268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:10.943672 containerd[1834]: time="2025-07-07T05:54:10.943608868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:10.943967 containerd[1834]: time="2025-07-07T05:54:10.943646508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:10.944048 containerd[1834]: time="2025-07-07T05:54:10.943822908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:11.009893 containerd[1834]: time="2025-07-07T05:54:11.009773381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75wzk,Uid:4f15f550-1ef2-4040-9e85-a0ab3be387d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70\"" Jul 7 05:54:12.399410 containerd[1834]: time="2025-07-07T05:54:12.399354585Z" level=info msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" Jul 7 05:54:12.400738 containerd[1834]: time="2025-07-07T05:54:12.399621705Z" level=info msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" Jul 7 05:54:12.411547 containerd[1834]: time="2025-07-07T05:54:12.410687104Z" level=info msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" Jul 7 05:54:12.566184 systemd-networkd[1397]: cali36bbbc129e7: Gained 
IPv6LL Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.597 [INFO][5110] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.599 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" iface="eth0" netns="/var/run/netns/cni-d4ef3486-b41c-25ac-b51d-e86924168619" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.600 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" iface="eth0" netns="/var/run/netns/cni-d4ef3486-b41c-25ac-b51d-e86924168619" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.605 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" iface="eth0" netns="/var/run/netns/cni-d4ef3486-b41c-25ac-b51d-e86924168619" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.605 [INFO][5110] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.605 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.694 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 
05:54:12.696 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.696 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.712 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.712 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.716 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:12.727184 containerd[1834]: 2025-07-07 05:54:12.724 [INFO][5110] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:12.727630 containerd[1834]: time="2025-07-07T05:54:12.727476108Z" level=info msg="TearDown network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" successfully" Jul 7 05:54:12.727630 containerd[1834]: time="2025-07-07T05:54:12.727506908Z" level=info msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" returns successfully" Jul 7 05:54:12.731291 containerd[1834]: time="2025-07-07T05:54:12.730759908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c48c79585-dwtff,Uid:241b2127-b6e0-4525-9ffd-ee1fb00c225a,Namespace:calico-system,Attempt:1,}" Jul 7 05:54:12.733713 systemd[1]: run-netns-cni\x2dd4ef3486\x2db41c\x2d25ac\x2db51d\x2de86924168619.mount: Deactivated successfully. Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.623 [INFO][5118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.624 [INFO][5118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" iface="eth0" netns="/var/run/netns/cni-073f895e-b78f-a7b7-932c-caa9531ea7b3" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.624 [INFO][5118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" iface="eth0" netns="/var/run/netns/cni-073f895e-b78f-a7b7-932c-caa9531ea7b3" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.625 [INFO][5118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" iface="eth0" netns="/var/run/netns/cni-073f895e-b78f-a7b7-932c-caa9531ea7b3" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.625 [INFO][5118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.625 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.701 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.701 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.715 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.734 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.734 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.737 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:12.743647 containerd[1834]: 2025-07-07 05:54:12.739 [INFO][5118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:12.748167 containerd[1834]: time="2025-07-07T05:54:12.747957106Z" level=info msg="TearDown network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" successfully" Jul 7 05:54:12.748167 containerd[1834]: time="2025-07-07T05:54:12.748092066Z" level=info msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" returns successfully" Jul 7 05:54:12.754559 containerd[1834]: time="2025-07-07T05:54:12.754221825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-znvbk,Uid:ad1a3f21-6e23-4ca3-b64b-f1320f379e83,Namespace:kube-system,Attempt:1,}" Jul 7 05:54:12.756697 systemd[1]: run-netns-cni\x2d073f895e\x2db78f\x2da7b7\x2d932c\x2dcaa9531ea7b3.mount: Deactivated successfully. 
Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.605 [INFO][5127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.607 [INFO][5127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" iface="eth0" netns="/var/run/netns/cni-fa0003ea-122d-ab04-527c-44973723256e" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.608 [INFO][5127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" iface="eth0" netns="/var/run/netns/cni-fa0003ea-122d-ab04-527c-44973723256e" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.614 [INFO][5127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" iface="eth0" netns="/var/run/netns/cni-fa0003ea-122d-ab04-527c-44973723256e" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.614 [INFO][5127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.614 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.712 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.712 [INFO][5145] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.738 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.760 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.760 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.763 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:12.771201 containerd[1834]: 2025-07-07 05:54:12.767 [INFO][5127] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:12.772457 containerd[1834]: time="2025-07-07T05:54:12.772320303Z" level=info msg="TearDown network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" successfully" Jul 7 05:54:12.772457 containerd[1834]: time="2025-07-07T05:54:12.772355543Z" level=info msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" returns successfully" Jul 7 05:54:12.774538 containerd[1834]: time="2025-07-07T05:54:12.774052143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7clz,Uid:c92a7bb2-2db4-4f96-97d0-028fc27545ab,Namespace:calico-system,Attempt:1,}" Jul 7 05:54:12.777453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949260394.mount: Deactivated successfully. Jul 7 05:54:12.777648 systemd[1]: run-netns-cni\x2dfa0003ea\x2d122d\x2dab04\x2d527c\x2d44973723256e.mount: Deactivated successfully. Jul 7 05:54:12.994933 containerd[1834]: time="2025-07-07T05:54:12.993790239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:13.000493 containerd[1834]: time="2025-07-07T05:54:13.000433838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 05:54:13.005399 containerd[1834]: time="2025-07-07T05:54:13.005338837Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:13.018799 containerd[1834]: time="2025-07-07T05:54:13.018706356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:13.019075 containerd[1834]: 
time="2025-07-07T05:54:13.019031036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.100117053s" Jul 7 05:54:13.019235 containerd[1834]: time="2025-07-07T05:54:13.019095996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 05:54:13.025649 containerd[1834]: time="2025-07-07T05:54:13.025506555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 05:54:13.031470 containerd[1834]: time="2025-07-07T05:54:13.031402594Z" level=info msg="CreateContainer within sandbox \"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 05:54:13.112976 containerd[1834]: time="2025-07-07T05:54:13.112532905Z" level=info msg="CreateContainer within sandbox \"ac0f5c29f3819ec0c0f328dd480f0f5a98520ecec9c5b34a548ea45ef051d037\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8d8f6e9177b1adff15da923a3a68dd112443179312f80e7c359f0c88d5c77bb2\"" Jul 7 05:54:13.115444 containerd[1834]: time="2025-07-07T05:54:13.115373305Z" level=info msg="StartContainer for \"8d8f6e9177b1adff15da923a3a68dd112443179312f80e7c359f0c88d5c77bb2\"" Jul 7 05:54:13.143951 systemd-networkd[1397]: cali46c72a69039: Link UP Jul 7 05:54:13.146247 systemd-networkd[1397]: cali46c72a69039: Gained carrier Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:12.974 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0 coredns-7c65d6cfc9- kube-system ad1a3f21-6e23-4ca3-b64b-f1320f379e83 940 0 2025-07-07 05:53:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd coredns-7c65d6cfc9-znvbk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c72a69039 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:12.974 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.072 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" HandleID="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.072 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" HandleID="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037a2b0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"coredns-7c65d6cfc9-znvbk", "timestamp":"2025-07-07 05:54:13.07244503 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.072 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.072 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.072 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.086 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.092 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.101 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.103 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.107 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.108 [INFO][5193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 
handle="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.110 [INFO][5193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8 Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.117 [INFO][5193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.196/26] handle="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:54:13.179837 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" HandleID="k8s-pod-network.23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.180479 containerd[1834]: 2025-07-07 05:54:13.131 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1a3f21-6e23-4ca3-b64b-f1320f379e83", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"coredns-7c65d6cfc9-znvbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali46c72a69039", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.180479 containerd[1834]: 2025-07-07 05:54:13.131 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.196/32] ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.180479 containerd[1834]: 2025-07-07 05:54:13.131 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c72a69039 ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.180479 containerd[1834]: 2025-07-07 05:54:13.148 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.180479 containerd[1834]: 2025-07-07 05:54:13.151 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" 
WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1a3f21-6e23-4ca3-b64b-f1320f379e83", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8", Pod:"coredns-7c65d6cfc9-znvbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c72a69039", MAC:"aa:43:f2:cb:38:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.180479 containerd[1834]: 
2025-07-07 05:54:13.172 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-znvbk" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:13.218205 containerd[1834]: time="2025-07-07T05:54:13.217519094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:13.218205 containerd[1834]: time="2025-07-07T05:54:13.217602614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:13.218729 containerd[1834]: time="2025-07-07T05:54:13.217690054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.220626 containerd[1834]: time="2025-07-07T05:54:13.218983653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.262985 systemd-networkd[1397]: cali8de4630aef1: Link UP Jul 7 05:54:13.273145 systemd-networkd[1397]: cali8de4630aef1: Gained carrier Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.003 [INFO][5174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0 calico-kube-controllers-5c48c79585- calico-system 241b2127-b6e0-4525-9ffd-ee1fb00c225a 938 0 2025-07-07 05:53:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c48c79585 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd calico-kube-controllers-5c48c79585-dwtff eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8de4630aef1 [] [] }} ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.003 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.089 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" 
HandleID="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.089 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" HandleID="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331a00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"calico-kube-controllers-5c48c79585-dwtff", "timestamp":"2025-07-07 05:54:13.088988748 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.089 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.128 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.189 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.199 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.205 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.210 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.215 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.215 [INFO][5205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.217 [INFO][5205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3 Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.227 [INFO][5205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.245 [INFO][5205] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.119.197/26] block=192.168.119.192/26 handle="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.247 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.197/26] handle="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.247 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:13.311854 containerd[1834]: 2025-07-07 05:54:13.247 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.197/26] IPv6=[] ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" HandleID="k8s-pod-network.650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.251 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0", GenerateName:"calico-kube-controllers-5c48c79585-", Namespace:"calico-system", SelfLink:"", UID:"241b2127-b6e0-4525-9ffd-ee1fb00c225a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c48c79585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"calico-kube-controllers-5c48c79585-dwtff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8de4630aef1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.251 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.197/32] ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.251 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8de4630aef1 ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.274 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.283 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0", GenerateName:"calico-kube-controllers-5c48c79585-", Namespace:"calico-system", SelfLink:"", UID:"241b2127-b6e0-4525-9ffd-ee1fb00c225a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c48c79585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3", Pod:"calico-kube-controllers-5c48c79585-dwtff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8de4630aef1", MAC:"e2:21:75:aa:6b:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.312459 containerd[1834]: 2025-07-07 05:54:13.300 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3" Namespace="calico-system" Pod="calico-kube-controllers-5c48c79585-dwtff" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:13.334658 containerd[1834]: time="2025-07-07T05:54:13.334607081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-znvbk,Uid:ad1a3f21-6e23-4ca3-b64b-f1320f379e83,Namespace:kube-system,Attempt:1,} returns sandbox id \"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8\"" Jul 7 05:54:13.335180 containerd[1834]: time="2025-07-07T05:54:13.334963601Z" level=info msg="StartContainer for \"8d8f6e9177b1adff15da923a3a68dd112443179312f80e7c359f0c88d5c77bb2\" returns successfully" Jul 7 05:54:13.348311 containerd[1834]: time="2025-07-07T05:54:13.348036039Z" level=info msg="CreateContainer within sandbox \"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:54:13.364113 containerd[1834]: time="2025-07-07T05:54:13.363820637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:13.364113 containerd[1834]: time="2025-07-07T05:54:13.364002477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:13.364113 containerd[1834]: time="2025-07-07T05:54:13.364021157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.364980 containerd[1834]: time="2025-07-07T05:54:13.364748757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.393281 systemd-networkd[1397]: calic4efa11edf8: Link UP Jul 7 05:54:13.403710 systemd-networkd[1397]: calic4efa11edf8: Gained carrier Jul 7 05:54:13.469410 containerd[1834]: time="2025-07-07T05:54:13.469226906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c48c79585-dwtff,Uid:241b2127-b6e0-4525-9ffd-ee1fb00c225a,Namespace:calico-system,Attempt:1,} returns sandbox id \"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3\"" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.042 [INFO][5187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0 csi-node-driver- calico-system c92a7bb2-2db4-4f96-97d0-028fc27545ab 939 0 2025-07-07 05:53:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd csi-node-driver-z7clz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4efa11edf8 [] [] }} ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-" Jul 7 05:54:13.469867 
containerd[1834]: 2025-07-07 05:54:13.042 [INFO][5187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.122 [INFO][5213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" HandleID="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.123 [INFO][5213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" HandleID="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"csi-node-driver-z7clz", "timestamp":"2025-07-07 05:54:13.122871824 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.123 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.247 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.247 [INFO][5213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.301 [INFO][5213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.316 [INFO][5213] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.324 [INFO][5213] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.328 [INFO][5213] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.334 [INFO][5213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.334 [INFO][5213] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.339 [INFO][5213] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137 Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.360 [INFO][5213] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.375 [INFO][5213] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.119.198/26] block=192.168.119.192/26 handle="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.376 [INFO][5213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.198/26] handle="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.376 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:13.469867 containerd[1834]: 2025-07-07 05:54:13.376 [INFO][5213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.198/26] IPv6=[] ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" HandleID="k8s-pod-network.cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.472628 containerd[1834]: 2025-07-07 05:54:13.386 [INFO][5187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c92a7bb2-2db4-4f96-97d0-028fc27545ab", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"csi-node-driver-z7clz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4efa11edf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.472628 containerd[1834]: 2025-07-07 05:54:13.386 [INFO][5187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.198/32] ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.472628 containerd[1834]: 2025-07-07 05:54:13.386 [INFO][5187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4efa11edf8 ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.472628 containerd[1834]: 2025-07-07 05:54:13.398 [INFO][5187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.472628 
containerd[1834]: 2025-07-07 05:54:13.420 [INFO][5187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c92a7bb2-2db4-4f96-97d0-028fc27545ab", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137", Pod:"csi-node-driver-z7clz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4efa11edf8", MAC:"ea:8a:00:23:f7:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:13.472628 containerd[1834]: 
2025-07-07 05:54:13.457 [INFO][5187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137" Namespace="calico-system" Pod="csi-node-driver-z7clz" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:13.502854 containerd[1834]: time="2025-07-07T05:54:13.502801262Z" level=info msg="CreateContainer within sandbox \"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fc39715c8692faec5075bb21e2f2c5e6fcca34cafe32adbde91b0f91516be3d\"" Jul 7 05:54:13.503475 containerd[1834]: time="2025-07-07T05:54:13.503442462Z" level=info msg="StartContainer for \"5fc39715c8692faec5075bb21e2f2c5e6fcca34cafe32adbde91b0f91516be3d\"" Jul 7 05:54:13.541164 containerd[1834]: time="2025-07-07T05:54:13.540699938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:13.541164 containerd[1834]: time="2025-07-07T05:54:13.540779217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:13.541164 containerd[1834]: time="2025-07-07T05:54:13.540795657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.541164 containerd[1834]: time="2025-07-07T05:54:13.540908057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:13.634928 containerd[1834]: time="2025-07-07T05:54:13.633620927Z" level=info msg="StartContainer for \"5fc39715c8692faec5075bb21e2f2c5e6fcca34cafe32adbde91b0f91516be3d\" returns successfully" Jul 7 05:54:13.639478 containerd[1834]: time="2025-07-07T05:54:13.639186206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7clz,Uid:c92a7bb2-2db4-4f96-97d0-028fc27545ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137\"" Jul 7 05:54:13.730664 kubelet[3307]: I0707 05:54:13.730451 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b7bc54c89-dqx59" podStartSLOduration=2.06544827 podStartE2EDuration="7.730429396s" podCreationTimestamp="2025-07-07 05:54:06 +0000 UTC" firstStartedPulling="2025-07-07 05:54:07.358755589 +0000 UTC m=+46.094894972" lastFinishedPulling="2025-07-07 05:54:13.023736715 +0000 UTC m=+51.759876098" observedRunningTime="2025-07-07 05:54:13.703771839 +0000 UTC m=+52.439911262" watchObservedRunningTime="2025-07-07 05:54:13.730429396 +0000 UTC m=+52.466568739" Jul 7 05:54:14.393819 containerd[1834]: time="2025-07-07T05:54:14.393496362Z" level=info msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" Jul 7 05:54:14.393819 containerd[1834]: time="2025-07-07T05:54:14.393566242Z" level=info msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" Jul 7 05:54:14.463273 kubelet[3307]: I0707 05:54:14.463113 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-znvbk" podStartSLOduration=47.463083474 podStartE2EDuration="47.463083474s" podCreationTimestamp="2025-07-07 05:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 
05:54:13.731195436 +0000 UTC m=+52.467334819" watchObservedRunningTime="2025-07-07 05:54:14.463083474 +0000 UTC m=+53.199222857" Jul 7 05:54:14.485299 systemd-networkd[1397]: cali46c72a69039: Gained IPv6LL Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.461 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.462 [INFO][5479] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" iface="eth0" netns="/var/run/netns/cni-ef2885d0-4641-976c-01b3-61ccfb555c18" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.462 [INFO][5479] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" iface="eth0" netns="/var/run/netns/cni-ef2885d0-4641-976c-01b3-61ccfb555c18" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.462 [INFO][5479] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" iface="eth0" netns="/var/run/netns/cni-ef2885d0-4641-976c-01b3-61ccfb555c18" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.462 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.462 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.494 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.494 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.494 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.504 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.504 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.505 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:14.512454 containerd[1834]: 2025-07-07 05:54:14.510 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:14.515758 containerd[1834]: time="2025-07-07T05:54:14.513113029Z" level=info msg="TearDown network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" successfully" Jul 7 05:54:14.516526 containerd[1834]: time="2025-07-07T05:54:14.516444868Z" level=info msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" returns successfully" Jul 7 05:54:14.518148 containerd[1834]: time="2025-07-07T05:54:14.517815708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-khd5g,Uid:04f2c61c-8783-44a5-a6b4-59965cc32dc5,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:54:14.521426 systemd[1]: run-netns-cni\x2def2885d0\x2d4641\x2d976c\x2d01b3\x2d61ccfb555c18.mount: Deactivated successfully. 
Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.468 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.468 [INFO][5480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" iface="eth0" netns="/var/run/netns/cni-a9a8ed81-6434-068f-f0c1-0ea1ca1e56b6" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.469 [INFO][5480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" iface="eth0" netns="/var/run/netns/cni-a9a8ed81-6434-068f-f0c1-0ea1ca1e56b6" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.469 [INFO][5480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" iface="eth0" netns="/var/run/netns/cni-a9a8ed81-6434-068f-f0c1-0ea1ca1e56b6" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.469 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.469 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.500 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.500 
[INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.506 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.522 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.522 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.532 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:14.540082 containerd[1834]: 2025-07-07 05:54:14.534 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:14.540082 containerd[1834]: time="2025-07-07T05:54:14.537964586Z" level=info msg="TearDown network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" successfully" Jul 7 05:54:14.540082 containerd[1834]: time="2025-07-07T05:54:14.537995026Z" level=info msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" returns successfully" Jul 7 05:54:14.544344 containerd[1834]: time="2025-07-07T05:54:14.542416905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-8dtfq,Uid:617d9772-8b6b-4315-9d5b-66e788e56a42,Namespace:calico-apiserver,Attempt:1,}" Jul 7 05:54:14.542600 systemd[1]: run-netns-cni\x2da9a8ed81\x2d6434\x2d068f\x2df0c1\x2d0ea1ca1e56b6.mount: Deactivated successfully. Jul 7 05:54:14.551215 systemd-networkd[1397]: cali8de4630aef1: Gained IPv6LL Jul 7 05:54:14.805568 systemd-networkd[1397]: calic4efa11edf8: Gained IPv6LL Jul 7 05:54:14.938615 systemd-networkd[1397]: cali311c6d8b1ab: Link UP Jul 7 05:54:14.939121 systemd-networkd[1397]: cali311c6d8b1ab: Gained carrier Jul 7 05:54:15.020386 systemd-networkd[1397]: cali3bad4ea4619: Link UP Jul 7 05:54:15.021949 systemd-networkd[1397]: cali3bad4ea4619: Gained carrier Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.705 [INFO][5519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0 calico-apiserver-577bd8b5bc- calico-apiserver 617d9772-8b6b-4315-9d5b-66e788e56a42 975 0 2025-07-07 05:53:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577bd8b5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd 
calico-apiserver-577bd8b5bc-8dtfq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali311c6d8b1ab [] [] }} ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.706 [INFO][5519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.816 [INFO][5536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" HandleID="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.816 [INFO][5536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" HandleID="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000375690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"calico-apiserver-577bd8b5bc-8dtfq", "timestamp":"2025-07-07 05:54:14.816408395 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.819 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.819 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.819 [INFO][5536] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.849 [INFO][5536] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.863 [INFO][5536] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.885 [INFO][5536] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.890 [INFO][5536] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.894 [INFO][5536] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.894 [INFO][5536] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.897 [INFO][5536] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749 Jul 7 05:54:15.023826 
containerd[1834]: 2025-07-07 05:54:14.908 [INFO][5536] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.927 [INFO][5536] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.199/26] block=192.168.119.192/26 handle="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.927 [INFO][5536] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.199/26] handle="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.927 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:15.023826 containerd[1834]: 2025-07-07 05:54:14.927 [INFO][5536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.199/26] IPv6=[] ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" HandleID="k8s-pod-network.70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:14.932 [INFO][5519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"617d9772-8b6b-4315-9d5b-66e788e56a42", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"calico-apiserver-577bd8b5bc-8dtfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali311c6d8b1ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:14.932 [INFO][5519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.199/32] ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:14.932 [INFO][5519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali311c6d8b1ab ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" 
WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:15.000 [INFO][5519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:15.001 [INFO][5519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"617d9772-8b6b-4315-9d5b-66e788e56a42", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", 
ContainerID:"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749", Pod:"calico-apiserver-577bd8b5bc-8dtfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali311c6d8b1ab", MAC:"72:8d:8f:6f:dc:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:15.024718 containerd[1834]: 2025-07-07 05:54:15.018 [INFO][5519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-8dtfq" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.676 [INFO][5507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0 calico-apiserver-577bd8b5bc- calico-apiserver 04f2c61c-8783-44a5-a6b4-59965cc32dc5 974 0 2025-07-07 05:53:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577bd8b5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-5429f7cfbd calico-apiserver-577bd8b5bc-khd5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bad4ea4619 [] [] }} ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-" Jul 7 05:54:15.052140 
containerd[1834]: 2025-07-07 05:54:14.677 [INFO][5507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.827 [INFO][5531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" HandleID="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.834 [INFO][5531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" HandleID="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-5429f7cfbd", "pod":"calico-apiserver-577bd8b5bc-khd5g", "timestamp":"2025-07-07 05:54:14.827868154 +0000 UTC"}, Hostname:"ci-4081.3.4-a-5429f7cfbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.834 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.928 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.929 [INFO][5531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-5429f7cfbd' Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.955 [INFO][5531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.966 [INFO][5531] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.973 [INFO][5531] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.976 [INFO][5531] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.980 [INFO][5531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.980 [INFO][5531] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.982 [INFO][5531] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:14.992 [INFO][5531] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:15.007 [INFO][5531] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.119.200/26] block=192.168.119.192/26 handle="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:15.007 [INFO][5531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.200/26] handle="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" host="ci-4081.3.4-a-5429f7cfbd" Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:15.007 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:15.052140 containerd[1834]: 2025-07-07 05:54:15.007 [INFO][5531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.200/26] IPv6=[] ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" HandleID="k8s-pod-network.0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.010 [INFO][5507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"04f2c61c-8783-44a5-a6b4-59965cc32dc5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"", Pod:"calico-apiserver-577bd8b5bc-khd5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bad4ea4619", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.011 [INFO][5507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.200/32] ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.011 [INFO][5507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bad4ea4619 ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.022 [INFO][5507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" 
WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.025 [INFO][5507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"04f2c61c-8783-44a5-a6b4-59965cc32dc5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f", Pod:"calico-apiserver-577bd8b5bc-khd5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bad4ea4619", MAC:"a6:6d:12:8e:c8:0e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:15.052854 containerd[1834]: 2025-07-07 05:54:15.047 [INFO][5507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f" Namespace="calico-apiserver" Pod="calico-apiserver-577bd8b5bc-khd5g" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:15.592963 containerd[1834]: time="2025-07-07T05:54:15.592437437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:15.592963 containerd[1834]: time="2025-07-07T05:54:15.592505277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:15.592963 containerd[1834]: time="2025-07-07T05:54:15.592523357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:15.592963 containerd[1834]: time="2025-07-07T05:54:15.592624637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:15.640868 containerd[1834]: time="2025-07-07T05:54:15.640556273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:15.640868 containerd[1834]: time="2025-07-07T05:54:15.640626033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:15.640868 containerd[1834]: time="2025-07-07T05:54:15.640641593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:15.640868 containerd[1834]: time="2025-07-07T05:54:15.640730713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:15.697126 containerd[1834]: time="2025-07-07T05:54:15.696712388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-8dtfq,Uid:617d9772-8b6b-4315-9d5b-66e788e56a42,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749\"" Jul 7 05:54:15.721698 containerd[1834]: time="2025-07-07T05:54:15.721651266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577bd8b5bc-khd5g,Uid:04f2c61c-8783-44a5-a6b4-59965cc32dc5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f\"" Jul 7 05:54:16.215368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236603715.mount: Deactivated successfully. 
Jul 7 05:54:16.661501 systemd-networkd[1397]: cali3bad4ea4619: Gained IPv6LL Jul 7 05:54:16.780488 containerd[1834]: time="2025-07-07T05:54:16.780422778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:16.783144 containerd[1834]: time="2025-07-07T05:54:16.783094778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 05:54:16.788329 containerd[1834]: time="2025-07-07T05:54:16.788229178Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:16.795349 containerd[1834]: time="2025-07-07T05:54:16.795249337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:16.796318 containerd[1834]: time="2025-07-07T05:54:16.796184337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.770076622s" Jul 7 05:54:16.796318 containerd[1834]: time="2025-07-07T05:54:16.796222777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 05:54:16.798377 containerd[1834]: time="2025-07-07T05:54:16.798119417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 05:54:16.800746 containerd[1834]: time="2025-07-07T05:54:16.800697777Z" level=info msg="CreateContainer 
within sandbox \"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 05:54:16.841182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197006727.mount: Deactivated successfully. Jul 7 05:54:16.853280 systemd-networkd[1397]: cali311c6d8b1ab: Gained IPv6LL Jul 7 05:54:16.855386 containerd[1834]: time="2025-07-07T05:54:16.855343652Z" level=info msg="CreateContainer within sandbox \"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c383aa8cd9d78885c3d1452d70a1c9de323902951b8cc636cff35285a17a1737\"" Jul 7 05:54:16.856994 containerd[1834]: time="2025-07-07T05:54:16.856947252Z" level=info msg="StartContainer for \"c383aa8cd9d78885c3d1452d70a1c9de323902951b8cc636cff35285a17a1737\"" Jul 7 05:54:16.937224 containerd[1834]: time="2025-07-07T05:54:16.937026165Z" level=info msg="StartContainer for \"c383aa8cd9d78885c3d1452d70a1c9de323902951b8cc636cff35285a17a1737\" returns successfully" Jul 7 05:54:19.748513 kubelet[3307]: I0707 05:54:19.747846 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-75wzk" podStartSLOduration=28.967279216 podStartE2EDuration="34.746358253s" podCreationTimestamp="2025-07-07 05:53:45 +0000 UTC" firstStartedPulling="2025-07-07 05:54:11.01834214 +0000 UTC m=+49.754481523" lastFinishedPulling="2025-07-07 05:54:16.797421217 +0000 UTC m=+55.533560560" observedRunningTime="2025-07-07 05:54:17.735424539 +0000 UTC m=+56.471563922" watchObservedRunningTime="2025-07-07 05:54:19.746358253 +0000 UTC m=+58.482497636" Jul 7 05:54:19.942114 containerd[1834]: time="2025-07-07T05:54:19.941777517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:19.945568 containerd[1834]: time="2025-07-07T05:54:19.945413597Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 05:54:19.955380 containerd[1834]: time="2025-07-07T05:54:19.955280516Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:19.960747 containerd[1834]: time="2025-07-07T05:54:19.960658715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:19.961616 containerd[1834]: time="2025-07-07T05:54:19.961449555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.163288018s" Jul 7 05:54:19.961616 containerd[1834]: time="2025-07-07T05:54:19.961493755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 05:54:19.963371 containerd[1834]: time="2025-07-07T05:54:19.963301115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 05:54:19.977704 containerd[1834]: time="2025-07-07T05:54:19.977635474Z" level=info msg="CreateContainer within sandbox \"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 05:54:20.042821 containerd[1834]: time="2025-07-07T05:54:20.042685949Z" level=info msg="CreateContainer within sandbox 
\"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9d759979b7996912b3dbc11ca8c4dbf3aee501c0a2adbc83d2e17e242359f4f1\"" Jul 7 05:54:20.044478 containerd[1834]: time="2025-07-07T05:54:20.044348468Z" level=info msg="StartContainer for \"9d759979b7996912b3dbc11ca8c4dbf3aee501c0a2adbc83d2e17e242359f4f1\"" Jul 7 05:54:20.132095 containerd[1834]: time="2025-07-07T05:54:20.130435941Z" level=info msg="StartContainer for \"9d759979b7996912b3dbc11ca8c4dbf3aee501c0a2adbc83d2e17e242359f4f1\" returns successfully" Jul 7 05:54:20.760716 kubelet[3307]: I0707 05:54:20.756007 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c48c79585-dwtff" podStartSLOduration=29.277208919 podStartE2EDuration="35.75598537s" podCreationTimestamp="2025-07-07 05:53:45 +0000 UTC" firstStartedPulling="2025-07-07 05:54:13.483713144 +0000 UTC m=+52.219852487" lastFinishedPulling="2025-07-07 05:54:19.962489555 +0000 UTC m=+58.698628938" observedRunningTime="2025-07-07 05:54:20.75553477 +0000 UTC m=+59.491674153" watchObservedRunningTime="2025-07-07 05:54:20.75598537 +0000 UTC m=+59.492124753" Jul 7 05:54:21.391139 containerd[1834]: time="2025-07-07T05:54:21.390975237Z" level=info msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.483 [WARNING][5864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3631a971-c63b-496d-bbcc-d7a38e1fa7de", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963", Pod:"coredns-7c65d6cfc9-4668m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2b44b1a8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 
05:54:21.488 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.488 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" iface="eth0" netns="" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.488 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.488 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.557 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.557 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.557 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.570 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.570 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.576 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:21.600093 containerd[1834]: 2025-07-07 05:54:21.585 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.602096 containerd[1834]: time="2025-07-07T05:54:21.601403020Z" level=info msg="TearDown network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" successfully" Jul 7 05:54:21.602096 containerd[1834]: time="2025-07-07T05:54:21.601443940Z" level=info msg="StopPodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" returns successfully" Jul 7 05:54:21.603108 containerd[1834]: time="2025-07-07T05:54:21.602260380Z" level=info msg="RemovePodSandbox for \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" Jul 7 05:54:21.603108 containerd[1834]: time="2025-07-07T05:54:21.602306940Z" level=info msg="Forcibly stopping sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\"" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.732 [WARNING][5891] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3631a971-c63b-496d-bbcc-d7a38e1fa7de", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"3a9240674f45e7ac9f8ac4f4fa06028f5baf5b02bf078d67eb64b2f1a8e95963", Pod:"coredns-7c65d6cfc9-4668m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2b44b1a8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 
05:54:21.732 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.732 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" iface="eth0" netns="" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.732 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.732 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.815 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.816 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.816 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.832 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.832 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" HandleID="k8s-pod-network.484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--4668m-eth0" Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.835 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:21.840227 containerd[1834]: 2025-07-07 05:54:21.837 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59" Jul 7 05:54:21.841508 containerd[1834]: time="2025-07-07T05:54:21.840269160Z" level=info msg="TearDown network for sandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" successfully" Jul 7 05:54:21.848852 containerd[1834]: time="2025-07-07T05:54:21.848766279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:21.852526 containerd[1834]: time="2025-07-07T05:54:21.852477719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 05:54:21.868201 containerd[1834]: time="2025-07-07T05:54:21.867624798Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:21.881173 containerd[1834]: time="2025-07-07T05:54:21.879796877Z" level=warning msg="Failed to get podSandbox status for container event for 
sandboxID \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:54:21.887750 containerd[1834]: time="2025-07-07T05:54:21.884329676Z" level=info msg="RemovePodSandbox \"484008d039776aa6079e139ebdf9e34acb477697c1a00bb6cf544a20983f1a59\" returns successfully" Jul 7 05:54:21.890743 containerd[1834]: time="2025-07-07T05:54:21.890694636Z" level=info msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" Jul 7 05:54:21.916107 containerd[1834]: time="2025-07-07T05:54:21.913024434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.949668359s" Jul 7 05:54:21.916107 containerd[1834]: time="2025-07-07T05:54:21.915143834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 05:54:21.916107 containerd[1834]: time="2025-07-07T05:54:21.915440874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:21.925424 containerd[1834]: time="2025-07-07T05:54:21.925275953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:54:21.928482 containerd[1834]: time="2025-07-07T05:54:21.928303033Z" level=info msg="CreateContainer within sandbox \"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 05:54:22.035447 containerd[1834]: 
time="2025-07-07T05:54:22.035175184Z" level=info msg="CreateContainer within sandbox \"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"089fa427955ec00553e2d36750dea23976d2aeba20603b90d4cc11d8aa96be87\"" Jul 7 05:54:22.039192 containerd[1834]: time="2025-07-07T05:54:22.037764784Z" level=info msg="StartContainer for \"089fa427955ec00553e2d36750dea23976d2aeba20603b90d4cc11d8aa96be87\"" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.023 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"04f2c61c-8783-44a5-a6b4-59965cc32dc5", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f", Pod:"calico-apiserver-577bd8b5bc-khd5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bad4ea4619", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.023 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.023 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" iface="eth0" netns="" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.023 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.023 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.068 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.068 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.069 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.095 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.095 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.100 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.121854 containerd[1834]: 2025-07-07 05:54:22.103 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.123378 containerd[1834]: time="2025-07-07T05:54:22.123340937Z" level=info msg="TearDown network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" successfully" Jul 7 05:54:22.123478 containerd[1834]: time="2025-07-07T05:54:22.123464656Z" level=info msg="StopPodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" returns successfully" Jul 7 05:54:22.124580 containerd[1834]: time="2025-07-07T05:54:22.124552976Z" level=info msg="RemovePodSandbox for \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" Jul 7 05:54:22.124882 containerd[1834]: time="2025-07-07T05:54:22.124719416Z" level=info msg="Forcibly stopping sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\"" Jul 7 05:54:22.170465 containerd[1834]: time="2025-07-07T05:54:22.170407973Z" level=info msg="StartContainer for \"089fa427955ec00553e2d36750dea23976d2aeba20603b90d4cc11d8aa96be87\" returns successfully" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.203 [WARNING][5962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"04f2c61c-8783-44a5-a6b4-59965cc32dc5", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f", Pod:"calico-apiserver-577bd8b5bc-khd5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bad4ea4619", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.203 [INFO][5962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.203 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" iface="eth0" netns="" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.203 [INFO][5962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.203 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.230 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.230 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.230 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.240 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.240 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" HandleID="k8s-pod-network.a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--khd5g-eth0" Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.242 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.247403 containerd[1834]: 2025-07-07 05:54:22.243 [INFO][5962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c" Jul 7 05:54:22.247403 containerd[1834]: time="2025-07-07T05:54:22.247368886Z" level=info msg="TearDown network for sandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" successfully" Jul 7 05:54:22.290195 containerd[1834]: time="2025-07-07T05:54:22.278771884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:22.291076 containerd[1834]: time="2025-07-07T05:54:22.290980123Z" level=info msg="RemovePodSandbox \"a96bc5cd6b5fe93f61a41d93ecededf5a6369afeec8aa9da7b735bdaa199456c\" returns successfully" Jul 7 05:54:22.292189 containerd[1834]: time="2025-07-07T05:54:22.291617043Z" level=info msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.341 [WARNING][5993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1a3f21-6e23-4ca3-b64b-f1320f379e83", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8", Pod:"coredns-7c65d6cfc9-znvbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c72a69039", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.341 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.341 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" iface="eth0" netns="" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.341 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.341 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.369 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.369 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.370 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.380 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.381 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.382 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.391223 containerd[1834]: 2025-07-07 05:54:22.388 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.391223 containerd[1834]: time="2025-07-07T05:54:22.391201234Z" level=info msg="TearDown network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" successfully" Jul 7 05:54:22.393848 containerd[1834]: time="2025-07-07T05:54:22.391229274Z" level=info msg="StopPodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" returns successfully" Jul 7 05:54:22.393848 containerd[1834]: time="2025-07-07T05:54:22.392202874Z" level=info msg="RemovePodSandbox for \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" Jul 7 05:54:22.393848 containerd[1834]: time="2025-07-07T05:54:22.392258434Z" level=info msg="Forcibly stopping sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\"" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.459 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ad1a3f21-6e23-4ca3-b64b-f1320f379e83", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"23def43e848bde7583362cb1a1a99440421dd0d3384693be34c51e4576e2cee8", Pod:"coredns-7c65d6cfc9-znvbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c72a69039", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 
05:54:22.459 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.459 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" iface="eth0" netns="" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.459 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.459 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.511 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.512 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.512 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.526 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.526 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" HandleID="k8s-pod-network.4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-coredns--7c65d6cfc9--znvbk-eth0" Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.543 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.564119 containerd[1834]: 2025-07-07 05:54:22.548 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee" Jul 7 05:54:22.564119 containerd[1834]: time="2025-07-07T05:54:22.564021660Z" level=info msg="TearDown network for sandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" successfully" Jul 7 05:54:22.624289 containerd[1834]: time="2025-07-07T05:54:22.623719055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:22.624289 containerd[1834]: time="2025-07-07T05:54:22.623870255Z" level=info msg="RemovePodSandbox \"4a80186213ed01f9d485086b9dc30f1ce383666df4dc929b630cca3cd074d0ee\" returns successfully" Jul 7 05:54:22.625835 containerd[1834]: time="2025-07-07T05:54:22.625762895Z" level=info msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.685 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f15f550-1ef2-4040-9e85-a0ab3be387d3", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70", Pod:"goldmane-58fd7646b9-75wzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali36bbbc129e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.685 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.685 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" iface="eth0" netns="" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.685 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.685 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.713 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.713 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.713 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.724 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.724 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.726 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.730035 containerd[1834]: 2025-07-07 05:54:22.728 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.730035 containerd[1834]: time="2025-07-07T05:54:22.729892126Z" level=info msg="TearDown network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" successfully" Jul 7 05:54:22.730035 containerd[1834]: time="2025-07-07T05:54:22.729922566Z" level=info msg="StopPodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" returns successfully" Jul 7 05:54:22.731254 containerd[1834]: time="2025-07-07T05:54:22.731016566Z" level=info msg="RemovePodSandbox for \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" Jul 7 05:54:22.731254 containerd[1834]: time="2025-07-07T05:54:22.731082366Z" level=info msg="Forcibly stopping sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\"" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.794 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f15f550-1ef2-4040-9e85-a0ab3be387d3", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"efb2dc06fd3c304fe3d22203af3c115eb5516df0796c75940427f74ac67e6a70", Pod:"goldmane-58fd7646b9-75wzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36bbbc129e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.795 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.795 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" iface="eth0" netns="" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.795 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.795 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.842 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.844 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.844 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.855 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.856 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" HandleID="k8s-pod-network.e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-goldmane--58fd7646b9--75wzk-eth0" Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.858 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.864320 containerd[1834]: 2025-07-07 05:54:22.860 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d" Jul 7 05:54:22.864770 containerd[1834]: time="2025-07-07T05:54:22.864364515Z" level=info msg="TearDown network for sandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" successfully" Jul 7 05:54:22.881095 containerd[1834]: time="2025-07-07T05:54:22.880985074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:22.881095 containerd[1834]: time="2025-07-07T05:54:22.881090634Z" level=info msg="RemovePodSandbox \"e3af87f4308479e73208128816fe826672b51a891746090098ada9c383a6f49d\" returns successfully" Jul 7 05:54:22.883250 containerd[1834]: time="2025-07-07T05:54:22.883008234Z" level=info msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.935 [WARNING][6078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c92a7bb2-2db4-4f96-97d0-028fc27545ab", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137", Pod:"csi-node-driver-z7clz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4efa11edf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.935 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.935 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" iface="eth0" netns="" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.935 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.935 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.969 [INFO][6085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.969 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.969 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.981 [WARNING][6085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.981 [INFO][6085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.984 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:22.989540 containerd[1834]: 2025-07-07 05:54:22.986 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:22.989540 containerd[1834]: time="2025-07-07T05:54:22.989286025Z" level=info msg="TearDown network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" successfully" Jul 7 05:54:22.989540 containerd[1834]: time="2025-07-07T05:54:22.989319305Z" level=info msg="StopPodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" returns successfully" Jul 7 05:54:22.992985 containerd[1834]: time="2025-07-07T05:54:22.991771225Z" level=info msg="RemovePodSandbox for \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" Jul 7 05:54:22.992985 containerd[1834]: time="2025-07-07T05:54:22.991818785Z" level=info msg="Forcibly stopping sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\"" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.061 [WARNING][6099] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c92a7bb2-2db4-4f96-97d0-028fc27545ab", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137", Pod:"csi-node-driver-z7clz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4efa11edf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.063 [INFO][6099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.063 [INFO][6099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" iface="eth0" netns="" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.063 [INFO][6099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.063 [INFO][6099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.092 [INFO][6107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.092 [INFO][6107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.093 [INFO][6107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.104 [WARNING][6107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.104 [INFO][6107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" HandleID="k8s-pod-network.081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-csi--node--driver--z7clz-eth0" Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.106 [INFO][6107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:23.110318 containerd[1834]: 2025-07-07 05:54:23.108 [INFO][6099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28" Jul 7 05:54:23.111284 containerd[1834]: time="2025-07-07T05:54:23.110475735Z" level=info msg="TearDown network for sandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" successfully" Jul 7 05:54:23.140339 containerd[1834]: time="2025-07-07T05:54:23.140123172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:23.140339 containerd[1834]: time="2025-07-07T05:54:23.140273052Z" level=info msg="RemovePodSandbox \"081adb404450b7bdb140c60e1ef2e5b2c531fd4066272d4d9b56827d5f0efc28\" returns successfully" Jul 7 05:54:23.141399 containerd[1834]: time="2025-07-07T05:54:23.141282772Z" level=info msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.195 [WARNING][6121] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.195 [INFO][6121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.196 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" iface="eth0" netns="" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.196 [INFO][6121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.196 [INFO][6121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.231 [INFO][6128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.232 [INFO][6128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.232 [INFO][6128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.242 [WARNING][6128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.242 [INFO][6128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.244 [INFO][6128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:23.248566 containerd[1834]: 2025-07-07 05:54:23.245 [INFO][6121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.249360 containerd[1834]: time="2025-07-07T05:54:23.248193483Z" level=info msg="TearDown network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" successfully" Jul 7 05:54:23.249360 containerd[1834]: time="2025-07-07T05:54:23.249170243Z" level=info msg="StopPodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" returns successfully" Jul 7 05:54:23.249899 containerd[1834]: time="2025-07-07T05:54:23.249870083Z" level=info msg="RemovePodSandbox for \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" Jul 7 05:54:23.250162 containerd[1834]: time="2025-07-07T05:54:23.249993483Z" level=info msg="Forcibly stopping sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\"" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.329 [WARNING][6142] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" WorkloadEndpoint="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.330 [INFO][6142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.330 [INFO][6142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" iface="eth0" netns="" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.330 [INFO][6142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.330 [INFO][6142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.375 [INFO][6149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.375 [INFO][6149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.375 [INFO][6149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.384 [WARNING][6149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.385 [INFO][6149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" HandleID="k8s-pod-network.023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-whisker--df7ff855f--vz8wf-eth0" Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.386 [INFO][6149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:23.390184 containerd[1834]: 2025-07-07 05:54:23.388 [INFO][6142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e" Jul 7 05:54:23.390650 containerd[1834]: time="2025-07-07T05:54:23.390195894Z" level=info msg="TearDown network for sandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" successfully" Jul 7 05:54:23.401434 containerd[1834]: time="2025-07-07T05:54:23.400662696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:23.401434 containerd[1834]: time="2025-07-07T05:54:23.400743656Z" level=info msg="RemovePodSandbox \"023fbad5de093ee3cb79c4b8de5b7bedcb197891d5aca65dc2a86f8afa12696e\" returns successfully" Jul 7 05:54:23.402456 containerd[1834]: time="2025-07-07T05:54:23.402095896Z" level=info msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.496 [WARNING][6163] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"617d9772-8b6b-4315-9d5b-66e788e56a42", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749", Pod:"calico-apiserver-577bd8b5bc-8dtfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali311c6d8b1ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.499 [INFO][6163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.499 [INFO][6163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" iface="eth0" netns="" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.499 [INFO][6163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.499 [INFO][6163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.590 [INFO][6170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.590 [INFO][6170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.590 [INFO][6170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.615 [WARNING][6170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.615 [INFO][6170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.617 [INFO][6170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:23.623405 containerd[1834]: 2025-07-07 05:54:23.620 [INFO][6163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.623405 containerd[1834]: time="2025-07-07T05:54:23.622795464Z" level=info msg="TearDown network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" successfully" Jul 7 05:54:23.623405 containerd[1834]: time="2025-07-07T05:54:23.622839304Z" level=info msg="StopPodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" returns successfully" Jul 7 05:54:23.624565 containerd[1834]: time="2025-07-07T05:54:23.624518064Z" level=info msg="RemovePodSandbox for \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" Jul 7 05:54:23.624565 containerd[1834]: time="2025-07-07T05:54:23.624564384Z" level=info msg="Forcibly stopping sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\"" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.716 [WARNING][6184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0", GenerateName:"calico-apiserver-577bd8b5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"617d9772-8b6b-4315-9d5b-66e788e56a42", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577bd8b5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749", Pod:"calico-apiserver-577bd8b5bc-8dtfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali311c6d8b1ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.716 [INFO][6184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.716 [INFO][6184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" iface="eth0" netns="" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.716 [INFO][6184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.716 [INFO][6184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.777 [INFO][6191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.777 [INFO][6191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.778 [INFO][6191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.801 [WARNING][6191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.801 [INFO][6191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" HandleID="k8s-pod-network.4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--apiserver--577bd8b5bc--8dtfq-eth0" Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.805 [INFO][6191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:23.815378 containerd[1834]: 2025-07-07 05:54:23.809 [INFO][6184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716" Jul 7 05:54:23.815378 containerd[1834]: time="2025-07-07T05:54:23.815251426Z" level=info msg="TearDown network for sandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" successfully" Jul 7 05:54:23.935541 containerd[1834]: time="2025-07-07T05:54:23.935232052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:23.935541 containerd[1834]: time="2025-07-07T05:54:23.935327972Z" level=info msg="RemovePodSandbox \"4f7df72db8b346513be7d7e63f7c439096b1fc03f70c643ac815bb1c5e8e9716\" returns successfully" Jul 7 05:54:23.937420 containerd[1834]: time="2025-07-07T05:54:23.937337692Z" level=info msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:23.986 [WARNING][6205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0", GenerateName:"calico-kube-controllers-5c48c79585-", Namespace:"calico-system", SelfLink:"", UID:"241b2127-b6e0-4525-9ffd-ee1fb00c225a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c48c79585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3", Pod:"calico-kube-controllers-5c48c79585-dwtff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8de4630aef1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:23.986 [INFO][6205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:23.986 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" iface="eth0" netns="" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:23.986 [INFO][6205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:23.986 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.019 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.019 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.020 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.038 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.038 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.049 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:24.062558 containerd[1834]: 2025-07-07 05:54:24.055 [INFO][6205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.065875 containerd[1834]: time="2025-07-07T05:54:24.063036519Z" level=info msg="TearDown network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" successfully" Jul 7 05:54:24.065875 containerd[1834]: time="2025-07-07T05:54:24.065132880Z" level=info msg="StopPodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" returns successfully" Jul 7 05:54:24.066484 containerd[1834]: time="2025-07-07T05:54:24.066455600Z" level=info msg="RemovePodSandbox for \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" Jul 7 05:54:24.066660 containerd[1834]: time="2025-07-07T05:54:24.066599600Z" level=info msg="Forcibly stopping sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\"" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.203 [WARNING][6230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0", GenerateName:"calico-kube-controllers-5c48c79585-", Namespace:"calico-system", SelfLink:"", UID:"241b2127-b6e0-4525-9ffd-ee1fb00c225a", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 53, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c48c79585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-5429f7cfbd", ContainerID:"650870c92d4f62f4ad84f0cdfeade389f2489923387028a0afbaf41e517edbf3", Pod:"calico-kube-controllers-5c48c79585-dwtff", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8de4630aef1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.206 [INFO][6230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.206 [INFO][6230] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" iface="eth0" netns="" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.207 [INFO][6230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.207 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.262 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.262 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.262 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.278 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.278 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" HandleID="k8s-pod-network.fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Workload="ci--4081.3.4--a--5429f7cfbd-k8s-calico--kube--controllers--5c48c79585--dwtff-eth0" Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.282 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:54:24.287762 containerd[1834]: 2025-07-07 05:54:24.285 [INFO][6230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44" Jul 7 05:54:24.289200 containerd[1834]: time="2025-07-07T05:54:24.288451688Z" level=info msg="TearDown network for sandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" successfully" Jul 7 05:54:24.305681 containerd[1834]: time="2025-07-07T05:54:24.305585292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 05:54:24.305681 containerd[1834]: time="2025-07-07T05:54:24.305758332Z" level=info msg="RemovePodSandbox \"fe27fc040bdb52457f98fe3877f3649bc1cebbb185ffc7cbc02537b46ecb0b44\" returns successfully" Jul 7 05:54:25.661856 containerd[1834]: time="2025-07-07T05:54:25.661789225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:25.671104 containerd[1834]: time="2025-07-07T05:54:25.670513267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 05:54:25.679585 containerd[1834]: time="2025-07-07T05:54:25.679530229Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:25.689450 containerd[1834]: time="2025-07-07T05:54:25.689178751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:25.690770 containerd[1834]: time="2025-07-07T05:54:25.690589871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.764363758s" Jul 7 05:54:25.690770 containerd[1834]: time="2025-07-07T05:54:25.690640631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:54:25.692986 containerd[1834]: time="2025-07-07T05:54:25.692937552Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 05:54:25.698199 containerd[1834]: time="2025-07-07T05:54:25.698131193Z" level=info msg="CreateContainer within sandbox \"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:54:25.746354 containerd[1834]: time="2025-07-07T05:54:25.746183923Z" level=info msg="CreateContainer within sandbox \"70241564aa1798ff48196875e3e04fc7c9ee31d750cd83c5c2468191dbb2f749\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f13e72b7b8bb1ecba9c68fda25cc82687bbda960dd60824f781861a1f0da95a\"" Jul 7 05:54:25.748147 containerd[1834]: time="2025-07-07T05:54:25.747600004Z" level=info msg="StartContainer for \"5f13e72b7b8bb1ecba9c68fda25cc82687bbda960dd60824f781861a1f0da95a\"" Jul 7 05:54:25.866218 containerd[1834]: time="2025-07-07T05:54:25.864535869Z" level=info msg="StartContainer for \"5f13e72b7b8bb1ecba9c68fda25cc82687bbda960dd60824f781861a1f0da95a\" returns successfully" Jul 7 05:54:26.094777 containerd[1834]: time="2025-07-07T05:54:26.092657398Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:26.096319 containerd[1834]: time="2025-07-07T05:54:26.096275599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 05:54:26.106093 containerd[1834]: time="2025-07-07T05:54:26.104264761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 411.276209ms" Jul 7 05:54:26.106367 containerd[1834]: time="2025-07-07T05:54:26.106327081Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 05:54:26.114461 containerd[1834]: time="2025-07-07T05:54:26.114138763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 05:54:26.117102 containerd[1834]: time="2025-07-07T05:54:26.115023363Z" level=info msg="CreateContainer within sandbox \"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 05:54:26.178783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342625597.mount: Deactivated successfully. Jul 7 05:54:26.186465 containerd[1834]: time="2025-07-07T05:54:26.186290698Z" level=info msg="CreateContainer within sandbox \"0942795e69d627c7e7cea198e51f4f03d6954997c78eed02ed7c9feae363683f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b4cfd91b99f79c4763ac3a23b0ffec13aa3829041899bff6311f6e809addfcd7\"" Jul 7 05:54:26.188897 containerd[1834]: time="2025-07-07T05:54:26.188852179Z" level=info msg="StartContainer for \"b4cfd91b99f79c4763ac3a23b0ffec13aa3829041899bff6311f6e809addfcd7\"" Jul 7 05:54:26.380287 containerd[1834]: time="2025-07-07T05:54:26.379459860Z" level=info msg="StartContainer for \"b4cfd91b99f79c4763ac3a23b0ffec13aa3829041899bff6311f6e809addfcd7\" returns successfully" Jul 7 05:54:26.864306 kubelet[3307]: I0707 05:54:26.862713 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-577bd8b5bc-khd5g" podStartSLOduration=37.478268709 podStartE2EDuration="47.862685965s" podCreationTimestamp="2025-07-07 05:53:39 +0000 UTC" firstStartedPulling="2025-07-07 05:54:15.723903386 +0000 UTC m=+54.460042769" lastFinishedPulling="2025-07-07 05:54:26.108320562 +0000 UTC m=+64.844460025" observedRunningTime="2025-07-07 05:54:26.850986922 +0000 UTC m=+65.587126305" 
watchObservedRunningTime="2025-07-07 05:54:26.862685965 +0000 UTC m=+65.598825348" Jul 7 05:54:27.841810 kubelet[3307]: I0707 05:54:27.841666 3307 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:54:27.842593 kubelet[3307]: I0707 05:54:27.842205 3307 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:54:28.175425 containerd[1834]: time="2025-07-07T05:54:28.174365608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:28.180518 containerd[1834]: time="2025-07-07T05:54:28.180322250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 05:54:28.185997 containerd[1834]: time="2025-07-07T05:54:28.185428811Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:28.195184 containerd[1834]: time="2025-07-07T05:54:28.193659133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:28.196954 containerd[1834]: time="2025-07-07T05:54:28.196900173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.08269069s" Jul 7 05:54:28.197206 containerd[1834]: time="2025-07-07T05:54:28.197150453Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 05:54:28.200899 containerd[1834]: time="2025-07-07T05:54:28.200763334Z" level=info msg="CreateContainer within sandbox \"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 05:54:28.258589 containerd[1834]: time="2025-07-07T05:54:28.258492147Z" level=info msg="CreateContainer within sandbox \"cd08e5744f94101c42ae6ba84fbd8386e82ac4593d60fdc4b315a3f57b904137\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a678ed2cb8f5e9a4b4f5882e95b2bc3c9ac209d99ab88bb363168517a27dc3d\"" Jul 7 05:54:28.260634 containerd[1834]: time="2025-07-07T05:54:28.260121147Z" level=info msg="StartContainer for \"1a678ed2cb8f5e9a4b4f5882e95b2bc3c9ac209d99ab88bb363168517a27dc3d\"" Jul 7 05:54:28.443741 containerd[1834]: time="2025-07-07T05:54:28.443507227Z" level=info msg="StartContainer for \"1a678ed2cb8f5e9a4b4f5882e95b2bc3c9ac209d99ab88bb363168517a27dc3d\" returns successfully" Jul 7 05:54:28.532731 kubelet[3307]: I0707 05:54:28.532621 3307 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 05:54:28.537292 kubelet[3307]: I0707 05:54:28.537253 3307 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 05:54:28.878391 kubelet[3307]: I0707 05:54:28.878094 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-577bd8b5bc-8dtfq" podStartSLOduration=39.886097637 podStartE2EDuration="49.878074881s" podCreationTimestamp="2025-07-07 05:53:39 +0000 UTC" firstStartedPulling="2025-07-07 05:54:15.700268628 +0000 UTC 
m=+54.436408011" lastFinishedPulling="2025-07-07 05:54:25.692245872 +0000 UTC m=+64.428385255" observedRunningTime="2025-07-07 05:54:26.882497009 +0000 UTC m=+65.618636392" watchObservedRunningTime="2025-07-07 05:54:28.878074881 +0000 UTC m=+67.614214264" Jul 7 05:54:39.037591 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:36134.service - OpenSSH per-connection server daemon (10.200.16.10:36134). Jul 7 05:54:39.517979 sshd[6425]: Accepted publickey for core from 10.200.16.10 port 36134 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:39.524786 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:39.542149 systemd-logind[1801]: New session 10 of user core. Jul 7 05:54:39.546693 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 05:54:40.042441 sshd[6425]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:40.054017 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:36134.service: Deactivated successfully. Jul 7 05:54:40.055180 systemd-logind[1801]: Session 10 logged out. Waiting for processes to exit. Jul 7 05:54:40.060511 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 05:54:40.062211 systemd-logind[1801]: Removed session 10. Jul 7 05:54:45.129306 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:44040.service - OpenSSH per-connection server daemon (10.200.16.10:44040). Jul 7 05:54:45.612286 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 44040 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:45.614999 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:45.623263 systemd-logind[1801]: New session 11 of user core. Jul 7 05:54:45.628507 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 05:54:46.133489 sshd[6441]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:46.145659 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:44040.service: Deactivated successfully. Jul 7 05:54:46.152650 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 05:54:46.155774 systemd-logind[1801]: Session 11 logged out. Waiting for processes to exit. Jul 7 05:54:46.159140 systemd-logind[1801]: Removed session 11. Jul 7 05:54:48.526967 kubelet[3307]: I0707 05:54:48.526725 3307 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:54:48.567526 kubelet[3307]: I0707 05:54:48.566022 3307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z7clz" podStartSLOduration=49.012224293 podStartE2EDuration="1m3.565995501s" podCreationTimestamp="2025-07-07 05:53:45 +0000 UTC" firstStartedPulling="2025-07-07 05:54:13.644560206 +0000 UTC m=+52.380699589" lastFinishedPulling="2025-07-07 05:54:28.198331454 +0000 UTC m=+66.934470797" observedRunningTime="2025-07-07 05:54:28.880207241 +0000 UTC m=+67.616346624" watchObservedRunningTime="2025-07-07 05:54:48.565995501 +0000 UTC m=+87.302135004" Jul 7 05:54:49.098502 kubelet[3307]: I0707 05:54:49.097865 3307 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 05:54:51.231765 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:41754.service - OpenSSH per-connection server daemon (10.200.16.10:41754). Jul 7 05:54:51.722188 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 41754 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:51.723426 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:51.736677 systemd-logind[1801]: New session 12 of user core. Jul 7 05:54:51.739386 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 05:54:52.236331 sshd[6490]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:52.241713 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:41754.service: Deactivated successfully. Jul 7 05:54:52.241922 systemd-logind[1801]: Session 12 logged out. Waiting for processes to exit. Jul 7 05:54:52.247371 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 05:54:52.254171 systemd-logind[1801]: Removed session 12. Jul 7 05:54:52.316503 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:41766.service - OpenSSH per-connection server daemon (10.200.16.10:41766). Jul 7 05:54:52.769512 sshd[6507]: Accepted publickey for core from 10.200.16.10 port 41766 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:52.771425 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:52.777148 systemd-logind[1801]: New session 13 of user core. Jul 7 05:54:52.791408 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 05:54:53.341618 sshd[6507]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:53.350578 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:41766.service: Deactivated successfully. Jul 7 05:54:53.356906 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 05:54:53.359677 systemd-logind[1801]: Session 13 logged out. Waiting for processes to exit. Jul 7 05:54:53.366557 systemd-logind[1801]: Removed session 13. Jul 7 05:54:53.428053 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:41770.service - OpenSSH per-connection server daemon (10.200.16.10:41770). Jul 7 05:54:53.899439 sshd[6519]: Accepted publickey for core from 10.200.16.10 port 41770 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:53.901887 sshd[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:53.917438 systemd-logind[1801]: New session 14 of user core. 
Jul 7 05:54:53.926642 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 05:54:54.378363 sshd[6519]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:54.387032 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:41770.service: Deactivated successfully. Jul 7 05:54:54.396742 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 05:54:54.403195 systemd-logind[1801]: Session 14 logged out. Waiting for processes to exit. Jul 7 05:54:54.406140 systemd-logind[1801]: Removed session 14. Jul 7 05:54:59.472787 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:41776.service - OpenSSH per-connection server daemon (10.200.16.10:41776). Jul 7 05:54:59.961137 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 41776 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris Jul 7 05:54:59.966800 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:59.977719 systemd-logind[1801]: New session 15 of user core. Jul 7 05:55:00.006708 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 05:55:00.445048 sshd[6582]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:00.455310 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:41776.service: Deactivated successfully. Jul 7 05:55:00.469289 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 05:55:00.473434 systemd-logind[1801]: Session 15 logged out. Waiting for processes to exit. Jul 7 05:55:00.475514 systemd-logind[1801]: Removed session 15. Jul 7 05:55:05.531380 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:50336.service - OpenSSH per-connection server daemon (10.200.16.10:50336). 
Jul 7 05:55:05.995259 sshd[6596]: Accepted publickey for core from 10.200.16.10 port 50336 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:05.996963 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:06.002697 systemd-logind[1801]: New session 16 of user core.
Jul 7 05:55:06.008127 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 05:55:06.418716 sshd[6596]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:06.424518 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:50336.service: Deactivated successfully.
Jul 7 05:55:06.424712 systemd-logind[1801]: Session 16 logged out. Waiting for processes to exit.
Jul 7 05:55:06.428776 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 05:55:06.430475 systemd-logind[1801]: Removed session 16.
Jul 7 05:55:11.514703 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:44768.service - OpenSSH per-connection server daemon (10.200.16.10:44768).
Jul 7 05:55:11.659930 systemd[1]: run-containerd-runc-k8s.io-9d759979b7996912b3dbc11ca8c4dbf3aee501c0a2adbc83d2e17e242359f4f1-runc.VcnyCx.mount: Deactivated successfully.
Jul 7 05:55:12.040363 sshd[6610]: Accepted publickey for core from 10.200.16.10 port 44768 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:12.042833 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:12.049640 systemd-logind[1801]: New session 17 of user core.
Jul 7 05:55:12.055447 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 05:55:12.526533 sshd[6610]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:12.538409 systemd-logind[1801]: Session 17 logged out. Waiting for processes to exit.
Jul 7 05:55:12.539029 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:44768.service: Deactivated successfully.
Jul 7 05:55:12.547968 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 05:55:12.553955 systemd-logind[1801]: Removed session 17.
Jul 7 05:55:12.609603 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:44780.service - OpenSSH per-connection server daemon (10.200.16.10:44780).
Jul 7 05:55:13.100900 sshd[6645]: Accepted publickey for core from 10.200.16.10 port 44780 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:13.103285 sshd[6645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:13.113646 systemd-logind[1801]: New session 18 of user core.
Jul 7 05:55:13.122900 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 05:55:13.717904 sshd[6645]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:13.722981 systemd-logind[1801]: Session 18 logged out. Waiting for processes to exit.
Jul 7 05:55:13.728681 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:44780.service: Deactivated successfully.
Jul 7 05:55:13.731744 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 05:55:13.734572 systemd-logind[1801]: Removed session 18.
Jul 7 05:55:13.802507 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:44782.service - OpenSSH per-connection server daemon (10.200.16.10:44782).
Jul 7 05:55:14.298651 sshd[6657]: Accepted publickey for core from 10.200.16.10 port 44782 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:14.302187 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:14.316887 systemd-logind[1801]: New session 19 of user core.
Jul 7 05:55:14.326021 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 05:55:17.523215 sshd[6657]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:17.532237 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:44782.service: Deactivated successfully.
Jul 7 05:55:17.537479 systemd-logind[1801]: Session 19 logged out. Waiting for processes to exit.
Jul 7 05:55:17.537791 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 05:55:17.539798 systemd-logind[1801]: Removed session 19.
Jul 7 05:55:17.609666 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:44790.service - OpenSSH per-connection server daemon (10.200.16.10:44790).
Jul 7 05:55:18.108096 sshd[6698]: Accepted publickey for core from 10.200.16.10 port 44790 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:18.108638 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:18.113398 systemd-logind[1801]: New session 20 of user core.
Jul 7 05:55:18.120339 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 05:55:18.894694 sshd[6698]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:18.903390 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:44790.service: Deactivated successfully.
Jul 7 05:55:18.911293 systemd-logind[1801]: Session 20 logged out. Waiting for processes to exit.
Jul 7 05:55:18.912190 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 05:55:18.914487 systemd-logind[1801]: Removed session 20.
Jul 7 05:55:18.980870 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:44804.service - OpenSSH per-connection server daemon (10.200.16.10:44804).
Jul 7 05:55:19.476527 sshd[6710]: Accepted publickey for core from 10.200.16.10 port 44804 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:19.478401 sshd[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:19.483618 systemd-logind[1801]: New session 21 of user core.
Jul 7 05:55:19.487396 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 05:55:19.928390 sshd[6710]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:19.937251 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:44804.service: Deactivated successfully.
Jul 7 05:55:19.944246 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 05:55:19.945988 systemd-logind[1801]: Session 21 logged out. Waiting for processes to exit.
Jul 7 05:55:19.954167 systemd-logind[1801]: Removed session 21.
Jul 7 05:55:25.018441 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:49350.service - OpenSSH per-connection server daemon (10.200.16.10:49350).
Jul 7 05:55:25.476490 sshd[6753]: Accepted publickey for core from 10.200.16.10 port 49350 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:25.479181 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:25.485106 systemd-logind[1801]: New session 22 of user core.
Jul 7 05:55:25.489419 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 05:55:25.878965 sshd[6753]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:25.887486 systemd-logind[1801]: Session 22 logged out. Waiting for processes to exit.
Jul 7 05:55:25.888618 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:49350.service: Deactivated successfully.
Jul 7 05:55:25.903268 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 05:55:25.909655 systemd-logind[1801]: Removed session 22.
Jul 7 05:55:30.963767 systemd[1]: Started sshd@20-10.200.20.35:22-10.200.16.10:33956.service - OpenSSH per-connection server daemon (10.200.16.10:33956).
Jul 7 05:55:31.457199 sshd[6812]: Accepted publickey for core from 10.200.16.10 port 33956 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:31.457347 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:31.464268 systemd-logind[1801]: New session 23 of user core.
Jul 7 05:55:31.471989 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 05:55:31.929885 sshd[6812]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:31.940295 systemd[1]: sshd@20-10.200.20.35:22-10.200.16.10:33956.service: Deactivated successfully.
Jul 7 05:55:31.946904 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 05:55:31.949185 systemd-logind[1801]: Session 23 logged out. Waiting for processes to exit.
Jul 7 05:55:31.950767 systemd-logind[1801]: Removed session 23.
Jul 7 05:55:37.008423 systemd[1]: Started sshd@21-10.200.20.35:22-10.200.16.10:33972.service - OpenSSH per-connection server daemon (10.200.16.10:33972).
Jul 7 05:55:37.459987 sshd[6828]: Accepted publickey for core from 10.200.16.10 port 33972 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:37.462344 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:37.468516 systemd-logind[1801]: New session 24 of user core.
Jul 7 05:55:37.472990 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 05:55:37.862846 sshd[6828]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:37.867449 systemd-logind[1801]: Session 24 logged out. Waiting for processes to exit.
Jul 7 05:55:37.869115 systemd[1]: sshd@21-10.200.20.35:22-10.200.16.10:33972.service: Deactivated successfully.
Jul 7 05:55:37.879982 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 05:55:37.882305 systemd-logind[1801]: Removed session 24.
Jul 7 05:55:42.951581 systemd[1]: Started sshd@22-10.200.20.35:22-10.200.16.10:34764.service - OpenSSH per-connection server daemon (10.200.16.10:34764).
Jul 7 05:55:43.427047 sshd[6862]: Accepted publickey for core from 10.200.16.10 port 34764 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:43.431809 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:43.444328 systemd-logind[1801]: New session 25 of user core.
Jul 7 05:55:43.450456 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 05:55:43.857114 sshd[6862]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:43.863255 systemd-logind[1801]: Session 25 logged out. Waiting for processes to exit.
Jul 7 05:55:43.863502 systemd[1]: sshd@22-10.200.20.35:22-10.200.16.10:34764.service: Deactivated successfully.
Jul 7 05:55:43.868321 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 05:55:43.873605 systemd-logind[1801]: Removed session 25.
Jul 7 05:55:48.939803 systemd[1]: Started sshd@23-10.200.20.35:22-10.200.16.10:34778.service - OpenSSH per-connection server daemon (10.200.16.10:34778).
Jul 7 05:55:49.422869 sshd[6876]: Accepted publickey for core from 10.200.16.10 port 34778 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:49.424409 sshd[6876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:49.429035 systemd-logind[1801]: New session 26 of user core.
Jul 7 05:55:49.434418 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 05:55:49.847231 sshd[6876]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:49.850983 systemd-logind[1801]: Session 26 logged out. Waiting for processes to exit.
Jul 7 05:55:49.851745 systemd[1]: sshd@23-10.200.20.35:22-10.200.16.10:34778.service: Deactivated successfully.
Jul 7 05:55:49.857579 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 05:55:49.861266 systemd-logind[1801]: Removed session 26.
Jul 7 05:55:54.932371 systemd[1]: Started sshd@24-10.200.20.35:22-10.200.16.10:60492.service - OpenSSH per-connection server daemon (10.200.16.10:60492).
Jul 7 05:55:55.395633 sshd[6913]: Accepted publickey for core from 10.200.16.10 port 60492 ssh2: RSA SHA256:9Tff9AeKQw7GwDLLteDmuZ6FHEIXkQ9sH32heblLris
Jul 7 05:55:55.397366 sshd[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:55.402965 systemd-logind[1801]: New session 27 of user core.
Jul 7 05:55:55.410514 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 05:55:55.811231 sshd[6913]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:55.815417 systemd[1]: sshd@24-10.200.20.35:22-10.200.16.10:60492.service: Deactivated successfully.
Jul 7 05:55:55.820777 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 05:55:55.821763 systemd-logind[1801]: Session 27 logged out. Waiting for processes to exit.
Jul 7 05:55:55.822744 systemd-logind[1801]: Removed session 27.
Jul 7 05:55:58.630552 systemd[1]: run-containerd-runc-k8s.io-9d759979b7996912b3dbc11ca8c4dbf3aee501c0a2adbc83d2e17e242359f4f1-runc.FSFbqC.mount: Deactivated successfully.