Jul 7 06:07:05.310882 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 06:07:05.310904 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 06:07:05.310912 kernel: KASLR enabled Jul 7 06:07:05.310918 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 7 06:07:05.310925 kernel: printk: bootconsole [pl11] enabled Jul 7 06:07:05.310931 kernel: efi: EFI v2.7 by EDK II Jul 7 06:07:05.310938 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jul 7 06:07:05.310945 kernel: random: crng init done Jul 7 06:07:05.310951 kernel: ACPI: Early table checksum verification disabled Jul 7 06:07:05.310957 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 7 06:07:05.310964 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310970 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310978 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 7 06:07:05.310984 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310992 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310998 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311005 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311013 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311020 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311026 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 7 06:07:05.311033 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311039 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 7 06:07:05.311045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 7 06:07:05.311052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 7 06:07:05.311058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 7 06:07:05.311064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 7 06:07:05.311071 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 7 06:07:05.311077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 7 06:07:05.311085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 7 06:07:05.311091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 7 06:07:05.311098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 7 06:07:05.311104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 7 06:07:05.311110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 7 06:07:05.311117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 7 06:07:05.311123 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 7 06:07:05.311129 kernel: Zone ranges: Jul 7 06:07:05.311136 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 7 06:07:05.311142 kernel: DMA32 empty Jul 7 06:07:05.311148 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 06:07:05.311155 kernel: Movable zone start for each node Jul 7 06:07:05.311165 kernel: Early memory node ranges Jul 7 06:07:05.311172 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 7 06:07:05.311179 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 7 06:07:05.311186 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 7 06:07:05.311192 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 7 06:07:05.311201 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 7 06:07:05.311208 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 7 06:07:05.311215 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 06:07:05.311221 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 7 06:07:05.311228 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 7 06:07:05.311235 kernel: psci: probing for conduit method from ACPI. Jul 7 06:07:05.311242 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 06:07:05.311248 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 06:07:05.311255 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 7 06:07:05.311262 kernel: psci: SMC Calling Convention v1.4 Jul 7 06:07:05.311269 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 06:07:05.311276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 7 06:07:05.311284 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 06:07:05.311291 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 06:07:05.311298 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 06:07:05.313349 kernel: Detected PIPT I-cache on CPU0 Jul 7 06:07:05.313369 kernel: CPU features: detected: GIC system register CPU interface Jul 7 06:07:05.313377 kernel: CPU features: detected: Hardware dirty bit management Jul 7 06:07:05.313384 kernel: CPU features: detected: Spectre-BHB Jul 7 06:07:05.313391 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 06:07:05.313398 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 06:07:05.313405 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 06:07:05.313412 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 7 06:07:05.313424 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 06:07:05.313431 kernel: alternatives: applying boot alternatives Jul 7 06:07:05.313441 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:07:05.313449 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:07:05.313455 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 06:07:05.313462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:07:05.313469 kernel: Fallback order for Node 0: 0 Jul 7 06:07:05.313476 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 7 06:07:05.313483 kernel: Policy zone: Normal Jul 7 06:07:05.313489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:07:05.313496 kernel: software IO TLB: area num 2. Jul 7 06:07:05.313505 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 7 06:07:05.313512 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 7 06:07:05.313519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 06:07:05.313526 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:07:05.313534 kernel: rcu: RCU event tracing is enabled. Jul 7 06:07:05.313541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 06:07:05.313548 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:07:05.313555 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:07:05.313562 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:07:05.313569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 06:07:05.313576 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 06:07:05.313584 kernel: GICv3: 960 SPIs implemented Jul 7 06:07:05.313591 kernel: GICv3: 0 Extended SPIs implemented Jul 7 06:07:05.313598 kernel: Root IRQ handler: gic_handle_irq Jul 7 06:07:05.313604 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 06:07:05.313611 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 7 06:07:05.313618 kernel: ITS: No ITS available, not enabling LPIs Jul 7 06:07:05.313625 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:07:05.313632 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:07:05.313639 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 06:07:05.313646 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 06:07:05.313653 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 06:07:05.313661 kernel: Console: colour dummy device 80x25 Jul 7 06:07:05.313669 kernel: printk: console [tty1] enabled Jul 7 06:07:05.313676 kernel: ACPI: Core revision 20230628 Jul 7 06:07:05.313683 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 06:07:05.313690 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:07:05.313698 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 06:07:05.313705 kernel: landlock: Up and running. Jul 7 06:07:05.313712 kernel: SELinux: Initializing. Jul 7 06:07:05.313719 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.313726 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.313735 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:07:05.313742 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:07:05.313749 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 7 06:07:05.313756 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 7 06:07:05.313763 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 7 06:07:05.313770 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:07:05.313777 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:07:05.313792 kernel: Remapping and enabling EFI services. Jul 7 06:07:05.313799 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:07:05.313806 kernel: Detected PIPT I-cache on CPU1 Jul 7 06:07:05.313814 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 7 06:07:05.313823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:07:05.313830 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 06:07:05.313838 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 06:07:05.313845 kernel: SMP: Total of 2 processors activated. Jul 7 06:07:05.313853 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 06:07:05.313862 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 7 06:07:05.313870 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 06:07:05.313878 kernel: CPU features: detected: CRC32 instructions Jul 7 06:07:05.313885 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 06:07:05.313893 kernel: CPU features: detected: LSE atomic instructions Jul 7 06:07:05.313900 kernel: CPU features: detected: Privileged Access Never Jul 7 06:07:05.313908 kernel: CPU: All CPU(s) started at EL1 Jul 7 06:07:05.313915 kernel: alternatives: applying system-wide alternatives Jul 7 06:07:05.313922 kernel: devtmpfs: initialized Jul 7 06:07:05.313932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:07:05.313940 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 06:07:05.313947 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:07:05.313955 kernel: SMBIOS 3.1.0 present. Jul 7 06:07:05.313963 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 7 06:07:05.313970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:07:05.313978 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 06:07:05.313985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 06:07:05.313993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 06:07:05.314002 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:07:05.314009 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 7 06:07:05.314017 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:07:05.314024 kernel: cpuidle: using governor menu Jul 7 06:07:05.314031 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 06:07:05.314039 kernel: ASID allocator initialised with 32768 entries Jul 7 06:07:05.314047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:07:05.314054 kernel: Serial: AMBA PL011 UART driver Jul 7 06:07:05.314061 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 06:07:05.314070 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 06:07:05.314078 kernel: Modules: 509008 pages in range for PLT usage Jul 7 06:07:05.314085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:07:05.314093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:07:05.314101 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 06:07:05.314109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 06:07:05.314117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:07:05.314125 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:07:05.314132 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 06:07:05.314142 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 06:07:05.314150 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:07:05.314157 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:07:05.314165 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:07:05.314172 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 06:07:05.314180 kernel: ACPI: Interpreter enabled Jul 7 06:07:05.314187 kernel: ACPI: Using GIC for interrupt routing Jul 7 06:07:05.314195 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 7 06:07:05.314202 kernel: printk: console [ttyAMA0] enabled Jul 7 06:07:05.314211 kernel: printk: bootconsole [pl11] disabled Jul 7 06:07:05.314218 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 7 06:07:05.314226 kernel: iommu: Default domain type: Translated Jul 7 06:07:05.314233 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 06:07:05.314241 kernel: efivars: Registered efivars operations Jul 7 06:07:05.314248 kernel: vgaarb: loaded Jul 7 06:07:05.314255 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 06:07:05.314263 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:07:05.314270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:07:05.314280 kernel: pnp: PnP ACPI init Jul 7 06:07:05.314287 kernel: pnp: PnP ACPI: found 0 devices Jul 7 06:07:05.314294 kernel: NET: Registered PF_INET protocol family Jul 7 06:07:05.314302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:07:05.314322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 06:07:05.314330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:07:05.314338 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:07:05.314346 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 06:07:05.314354 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 06:07:05.314363 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.314370 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.314378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:07:05.314386 kernel: PCI: CLS 0 bytes, default 64 
Jul 7 06:07:05.314393 kernel: kvm [1]: HYP mode not available Jul 7 06:07:05.314401 kernel: Initialise system trusted keyrings Jul 7 06:07:05.314409 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 06:07:05.314416 kernel: Key type asymmetric registered Jul 7 06:07:05.314423 kernel: Asymmetric key parser 'x509' registered Jul 7 06:07:05.314432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:07:05.314440 kernel: io scheduler mq-deadline registered Jul 7 06:07:05.314447 kernel: io scheduler kyber registered Jul 7 06:07:05.314454 kernel: io scheduler bfq registered Jul 7 06:07:05.314462 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:07:05.314469 kernel: thunder_xcv, ver 1.0 Jul 7 06:07:05.314477 kernel: thunder_bgx, ver 1.0 Jul 7 06:07:05.314484 kernel: nicpf, ver 1.0 Jul 7 06:07:05.314491 kernel: nicvf, ver 1.0 Jul 7 06:07:05.314661 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 06:07:05.314740 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:07:04 UTC (1751868424) Jul 7 06:07:05.314750 kernel: efifb: probing for efifb Jul 7 06:07:05.314758 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 7 06:07:05.314765 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 7 06:07:05.314773 kernel: efifb: scrolling: redraw Jul 7 06:07:05.314781 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 06:07:05.314788 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 06:07:05.314797 kernel: fb0: EFI VGA frame buffer device Jul 7 06:07:05.314805 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 7 06:07:05.314812 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:07:05.314820 kernel: No ACPI PMU IRQ for CPU0 Jul 7 06:07:05.314827 kernel: No ACPI PMU IRQ for CPU1 Jul 7 06:07:05.314834 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 7 06:07:05.314842 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 06:07:05.314849 kernel: watchdog: Hard watchdog permanently disabled Jul 7 06:07:05.314856 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:07:05.314865 kernel: Segment Routing with IPv6 Jul 7 06:07:05.314873 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:07:05.314881 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:07:05.314888 kernel: Key type dns_resolver registered Jul 7 06:07:05.314895 kernel: registered taskstats version 1 Jul 7 06:07:05.314903 kernel: Loading compiled-in X.509 certificates Jul 7 06:07:05.314910 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 06:07:05.314917 kernel: Key type .fscrypt registered Jul 7 06:07:05.314925 kernel: Key type fscrypt-provisioning registered Jul 7 06:07:05.314934 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 06:07:05.314942 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:07:05.314949 kernel: ima: No architecture policies found Jul 7 06:07:05.314957 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 06:07:05.314965 kernel: clk: Disabling unused clocks Jul 7 06:07:05.314972 kernel: Freeing unused kernel memory: 39424K Jul 7 06:07:05.314980 kernel: Run /init as init process Jul 7 06:07:05.314987 kernel: with arguments: Jul 7 06:07:05.314994 kernel: /init Jul 7 06:07:05.315004 kernel: with environment: Jul 7 06:07:05.315012 kernel: HOME=/ Jul 7 06:07:05.315019 kernel: TERM=linux Jul 7 06:07:05.315026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:07:05.315036 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:07:05.315046 systemd[1]: Detected virtualization microsoft. Jul 7 06:07:05.315054 systemd[1]: Detected architecture arm64. Jul 7 06:07:05.315061 systemd[1]: Running in initrd. Jul 7 06:07:05.315071 systemd[1]: No hostname configured, using default hostname. Jul 7 06:07:05.315079 systemd[1]: Hostname set to . Jul 7 06:07:05.315087 systemd[1]: Initializing machine ID from random generator. Jul 7 06:07:05.315095 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:07:05.315103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:05.315111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:05.315119 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:07:05.315127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:07:05.315137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:07:05.315146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:07:05.315156 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:07:05.315164 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:07:05.315173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:05.315181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:05.315190 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:07:05.315198 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:07:05.315206 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:07:05.315214 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:07:05.315225 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:07:05.315235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:07:05.315245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:07:05.315254 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 06:07:05.315263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 7 06:07:05.315276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:05.315285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:05.315295 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:07:05.317348 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:07:05.317376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:07:05.317385 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:07:05.317394 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:07:05.317402 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:07:05.317440 systemd-journald[217]: Collecting audit messages is disabled. Jul 7 06:07:05.317467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:07:05.317476 systemd-journald[217]: Journal started Jul 7 06:07:05.317501 systemd-journald[217]: Runtime Journal (/run/log/journal/3793d7731ea74c2fb230dd3105cc1432) is 8.0M, max 78.5M, 70.5M free. Jul 7 06:07:05.317546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:05.323770 systemd-modules-load[218]: Inserted module 'overlay' Jul 7 06:07:05.354325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:07:05.363903 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:07:05.363958 kernel: Bridge firewalling registered Jul 7 06:07:05.364044 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 7 06:07:05.372557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:07:05.383949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:05.396618 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:07:05.407299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:05.417514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:05.438602 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:05.447500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:07:05.473537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:07:05.491531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:07:05.504335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:05.517389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:07:05.534956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:05.547767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:05.572554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:07:05.580487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:07:05.595508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:07:05.620359 dracut-cmdline[250]: dracut-dracut-053 Jul 7 06:07:05.626131 systemd-resolved[251]: Positive Trust Anchors:
Jul 7 06:07:05.626152 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:07:05.646790 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:07:05.626183 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:07:05.628424 systemd-resolved[251]: Defaulting to hostname 'linux'. Jul 7 06:07:05.631402 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:07:05.640046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:05.655099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:05.815334 kernel: SCSI subsystem initialized Jul 7 06:07:05.824322 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:07:05.835329 kernel: iscsi: registered transport (tcp) Jul 7 06:07:05.853521 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:07:05.853578 kernel: QLogic iSCSI HBA Driver Jul 7 06:07:05.894923 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:07:05.909667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:07:05.942684 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:07:05.942757 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:07:05.948858 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 06:07:05.998337 kernel: raid6: neonx8 gen() 15754 MB/s Jul 7 06:07:06.018321 kernel: raid6: neonx4 gen() 15685 MB/s Jul 7 06:07:06.038319 kernel: raid6: neonx2 gen() 13239 MB/s Jul 7 06:07:06.059318 kernel: raid6: neonx1 gen() 10492 MB/s Jul 7 06:07:06.079316 kernel: raid6: int64x8 gen() 6966 MB/s Jul 7 06:07:06.099316 kernel: raid6: int64x4 gen() 7349 MB/s Jul 7 06:07:06.120323 kernel: raid6: int64x2 gen() 6131 MB/s Jul 7 06:07:06.143441 kernel: raid6: int64x1 gen() 5058 MB/s Jul 7 06:07:06.143463 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Jul 7 06:07:06.167448 kernel: raid6: .... xor() 11947 MB/s, rmw enabled Jul 7 06:07:06.167479 kernel: raid6: using neon recovery algorithm Jul 7 06:07:06.179701 kernel: xor: measuring software checksum speed Jul 7 06:07:06.179731 kernel: 8regs : 19754 MB/sec Jul 7 06:07:06.183185 kernel: 32regs : 19617 MB/sec Jul 7 06:07:06.186840 kernel: arm64_neon : 26238 MB/sec Jul 7 06:07:06.190731 kernel: xor: using function: arm64_neon (26238 MB/sec) Jul 7 06:07:06.242331 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:07:06.253238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:07:06.268490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:06.290954 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jul 7 06:07:06.294303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:06.317506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:07:06.329527 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Jul 7 06:07:06.356893 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:07:06.372441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:07:06.415050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:06.434621 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:07:06.456413 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:07:06.467519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:07:06.487767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:06.501864 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:07:06.517590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:07:06.542515 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 06:07:06.542914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:07:06.588984 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 06:07:06.589043 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:07:06.589054 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 06:07:06.589068 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:07:06.585152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:07:06.624252 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 7 06:07:06.624285 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 7 06:07:06.624295 kernel: hv_vmbus: registering driver hv_storvsc Jul 7 06:07:06.624315 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 7 06:07:06.585381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:06.663030 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 06:07:06.663059 kernel: scsi host0: storvsc_host_t Jul 7 06:07:06.663236 kernel: scsi host1: storvsc_host_t Jul 7 06:07:06.663345 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 7 06:07:06.607210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:06.643191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:06.643544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:06.669499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:06.704867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:06.728718 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 7 06:07:06.727751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 7 06:07:06.741642 kernel: PTP clock support registered Jul 7 06:07:06.727849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:06.773366 kernel: hv_utils: Registering HyperV Utility Driver Jul 7 06:07:06.773390 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: VF slot 1 added Jul 7 06:07:06.773550 kernel: hv_vmbus: registering driver hv_utils Jul 7 06:07:06.780511 kernel: hv_utils: Heartbeat IC version 3.0 Jul 7 06:07:06.780553 kernel: hv_utils: Shutdown IC version 3.2 Jul 7 06:07:06.774611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:07.072747 kernel: hv_utils: TimeSync IC version 4.0 Jul 7 06:07:07.072776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 7 06:07:07.072971 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:07:07.060560 systemd-resolved[251]: Clock change detected. Flushing caches. Jul 7 06:07:07.088334 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 7 06:07:07.089534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:07.114267 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 7 06:07:07.114512 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 7 06:07:07.119317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:07.164972 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 7 06:07:07.165139 kernel: hv_vmbus: registering driver hv_pci Jul 7 06:07:07.165150 kernel: hv_pci 7ad42c06-31eb-4567-918e-69a6bfc6e133: PCI VMBus probing: Using version 0x10004 Jul 7 06:07:07.165282 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 7 06:07:07.165377 kernel: hv_pci 7ad42c06-31eb-4567-918e-69a6bfc6e133: PCI host bridge to bus 31eb:00 Jul 7 06:07:07.165453 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 7 06:07:07.165536 kernel: pci_bus 31eb:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 7 06:07:07.172841 kernel: pci_bus 31eb:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:07:07.173009 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:07.181050 kernel: pci 31eb:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 7 06:07:07.181105 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 7 06:07:07.190510 kernel: pci 31eb:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 7 06:07:07.197776 kernel: pci 31eb:00:02.0: enabling Extended Tags Jul 7 06:07:07.223313 kernel: pci 31eb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 31eb:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 7 06:07:07.225413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 06:07:07.251140 kernel: pci_bus 31eb:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:07:07.251321 kernel: pci 31eb:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 7 06:07:07.280280 kernel: mlx5_core 31eb:00:02.0: enabling device (0000 -> 0002) Jul 7 06:07:07.286241 kernel: mlx5_core 31eb:00:02.0: firmware version: 16.30.1284 Jul 7 06:07:07.485865 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: VF registering: eth1 Jul 7 06:07:07.486071 kernel: mlx5_core 31eb:00:02.0 eth1: joined to eth0 Jul 7 06:07:07.493267 kernel: mlx5_core 31eb:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 7 06:07:07.503258 kernel: mlx5_core 31eb:00:02.0 enP12779s1: renamed from eth1 Jul 7 06:07:07.685246 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497) Jul 7 06:07:07.700000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 7 06:07:07.769398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 7 06:07:08.683162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 7 06:07:09.802262 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498) Jul 7 06:07:09.817442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 7 06:07:09.823879 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 7 06:07:09.855531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:07:09.883264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:09.894253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:10.904996 disk-uuid[604]: The operation has completed successfully. Jul 7 06:07:10.910134 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:10.970276 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:07:10.970384 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:07:11.006395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:07:11.018860 sh[690]: Success Jul 7 06:07:11.042318 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 06:07:11.221933 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:07:11.229316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:07:11.251317 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:07:11.281028 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 06:07:11.281094 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:11.287762 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 06:07:11.292831 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 06:07:11.298241 kernel: BTRFS info (device dm-0): using free space tree Jul 7 06:07:11.603042 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:07:11.608549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:07:11.634573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 7 06:07:11.642418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:07:11.679084 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:11.679139 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:11.683643 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:11.726142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:07:11.740878 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:11.744481 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:07:11.765402 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:11.759964 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 06:07:11.778200 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:07:11.795402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:07:11.811045 systemd-networkd[867]: lo: Link UP Jul 7 06:07:11.811061 systemd-networkd[867]: lo: Gained carrier Jul 7 06:07:11.812632 systemd-networkd[867]: Enumeration completed Jul 7 06:07:11.814730 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:07:11.815433 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:11.815437 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:07:11.825600 systemd[1]: Reached target network.target - Network. Jul 7 06:07:11.910246 kernel: mlx5_core 31eb:00:02.0 enP12779s1: Link up Jul 7 06:07:11.956469 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: Data path switched to VF: enP12779s1 Jul 7 06:07:11.956115 systemd-networkd[867]: enP12779s1: Link UP Jul 7 06:07:11.956203 systemd-networkd[867]: eth0: Link UP Jul 7 06:07:11.956325 systemd-networkd[867]: eth0: Gained carrier Jul 7 06:07:11.956334 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:11.981592 systemd-networkd[867]: enP12779s1: Gained carrier Jul 7 06:07:11.998307 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 06:07:12.575127 ignition[874]: Ignition 2.19.0 Jul 7 06:07:12.575334 ignition[874]: Stage: fetch-offline Jul 7 06:07:12.575391 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.575401 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.587177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:07:12.575543 ignition[874]: parsed url from cmdline: "" Jul 7 06:07:12.606549 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 06:07:12.575550 ignition[874]: no config URL provided Jul 7 06:07:12.575555 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:07:12.575563 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:07:12.575569 ignition[874]: failed to fetch config: resource requires networking Jul 7 06:07:12.578882 ignition[874]: Ignition finished successfully Jul 7 06:07:12.629743 ignition[883]: Ignition 2.19.0 Jul 7 06:07:12.629754 ignition[883]: Stage: fetch Jul 7 06:07:12.629960 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.629970 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.630079 ignition[883]: parsed url from cmdline: "" Jul 7 06:07:12.630083 ignition[883]: no config URL provided Jul 7 06:07:12.630088 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:07:12.630098 ignition[883]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:07:12.630121 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 7 06:07:12.752511 ignition[883]: GET result: OK Jul 7 06:07:12.752571 ignition[883]: config has been read from IMDS userdata Jul 7 06:07:12.752617 ignition[883]: parsing config with SHA512: 5ba6ef795f26cfe065b5eb98a69ae88817b5e2bebcf78de77635b613ac1d4f644da2ef7a76923e599b5a7cbf33b33a35a3758ac75afab9cc2c3845e0f7d50827 Jul 7 06:07:12.756251 unknown[883]: fetched base config from "system" Jul 7 06:07:12.756614 ignition[883]: fetch: fetch complete Jul 7 06:07:12.756260 unknown[883]: fetched base config from "system" Jul 7 06:07:12.756619 ignition[883]: fetch: fetch passed Jul 7 06:07:12.756265 unknown[883]: fetched user config from "azure" Jul 7 06:07:12.756662 ignition[883]: Ignition finished successfully Jul 7 06:07:12.762350 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:07:12.781377 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:07:12.799683 ignition[889]: Ignition 2.19.0 Jul 7 06:07:12.802859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:07:12.799690 ignition[889]: Stage: kargs Jul 7 06:07:12.799877 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.799887 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.827536 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:07:12.800791 ignition[889]: kargs: kargs passed Jul 7 06:07:12.800840 ignition[889]: Ignition finished successfully Jul 7 06:07:12.855268 ignition[896]: Ignition 2.19.0 Jul 7 06:07:12.855276 ignition[896]: Stage: disks Jul 7 06:07:12.862269 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:07:12.855476 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.868768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:07:12.855486 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.877651 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:07:12.856595 ignition[896]: disks: disks passed Jul 7 06:07:12.889385 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:07:12.856654 ignition[896]: Ignition finished successfully Jul 7 06:07:12.899472 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:07:12.910684 systemd[1]: Reached target basic.target - Basic System. 
Jul 7 06:07:12.937503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:07:13.017320 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 7 06:07:13.026069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:07:13.044475 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:07:13.107290 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 06:07:13.108296 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:07:13.117491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:07:13.161323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:07:13.168424 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:07:13.196259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (915) Jul 7 06:07:13.196321 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:13.202389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:13.203470 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:07:13.225833 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:13.225858 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:13.207338 systemd-networkd[867]: eth0: Gained IPv6LL Jul 7 06:07:13.214499 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:07:13.214535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:07:13.233454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:07:13.242113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:07:13.274547 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:07:13.716358 systemd-networkd[867]: enP12779s1: Gained IPv6LL Jul 7 06:07:13.742083 coreos-metadata[917]: Jul 07 06:07:13.741 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:07:13.752245 coreos-metadata[917]: Jul 07 06:07:13.752 INFO Fetch successful Jul 7 06:07:13.757932 coreos-metadata[917]: Jul 07 06:07:13.757 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:07:13.779506 coreos-metadata[917]: Jul 07 06:07:13.779 INFO Fetch successful Jul 7 06:07:13.793244 coreos-metadata[917]: Jul 07 06:07:13.793 INFO wrote hostname ci-4081.3.4-a-d5356a388e to /sysroot/etc/hostname Jul 7 06:07:13.802488 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:07:13.995063 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:07:14.015930 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:07:14.023952 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:07:14.033920 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:07:14.817940 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:07:14.832430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:07:14.844848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 7 06:07:14.860479 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:07:14.872602 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:14.882025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:07:14.896549 ignition[1036]: INFO : Ignition 2.19.0 Jul 7 06:07:14.901598 ignition[1036]: INFO : Stage: mount Jul 7 06:07:14.901598 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:14.901598 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:14.925166 ignition[1036]: INFO : mount: mount passed Jul 7 06:07:14.925166 ignition[1036]: INFO : Ignition finished successfully Jul 7 06:07:14.912455 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:07:14.937343 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:07:14.950425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:07:14.983633 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1044) Jul 7 06:07:14.983684 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:14.989877 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:14.994202 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:15.001244 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:15.002961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:07:15.027528 ignition[1062]: INFO : Ignition 2.19.0 Jul 7 06:07:15.027528 ignition[1062]: INFO : Stage: files Jul 7 06:07:15.035475 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:15.035475 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:15.035475 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:07:15.054383 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:07:15.054383 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:07:15.127598 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:07:15.135205 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:07:15.135205 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:07:15.127964 unknown[1062]: wrote ssh authorized keys file for user: core Jul 7 06:07:15.155479 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:07:15.166012 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 7 06:07:15.211088 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:07:15.359441 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 7 06:07:16.165188 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:07:16.391509 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:16.391509 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:07:16.416865 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: files passed Jul 7 06:07:16.428556 ignition[1062]: INFO : Ignition finished successfully
Jul 7 06:07:16.429196 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:07:16.469575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:07:16.487431 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:07:16.547034 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.547034 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.512416 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:07:16.575482 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.512507 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:07:16.523481 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:07:16.540488 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:07:16.584501 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:07:16.624616 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:07:16.624765 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:07:16.637470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:07:16.649728 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:07:16.660825 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:07:16.680547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:07:16.691686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:07:16.705465 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:07:16.730031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:16.743794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:16.750802 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:07:16.761829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:07:16.762023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:07:16.778104 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:07:16.789711 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:07:16.799945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:07:16.811020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:07:16.822974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:07:16.835895 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:07:16.848082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:07:16.861291 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:07:16.874686 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:07:16.886038 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:07:16.896029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:07:16.896211 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:07:16.912392 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:16.924582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:16.937206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:07:16.943297 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:16.950829 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:07:16.951000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:07:16.969455 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:07:16.969646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:07:16.982380 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:07:16.982541 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:07:16.993447 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 06:07:16.993603 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:07:17.026387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:07:17.055422 ignition[1113]: INFO : Ignition 2.19.0 Jul 7 06:07:17.055422 ignition[1113]: INFO : Stage: umount Jul 7 06:07:17.055422 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:17.055422 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:17.055422 ignition[1113]: INFO : umount: umount passed Jul 7 06:07:17.055422 ignition[1113]: INFO : Ignition finished successfully Jul 7 06:07:17.050287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:07:17.064974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:07:17.065167 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:17.082399 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:07:17.082534 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:07:17.098089 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:07:17.098778 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:07:17.098880 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:07:17.110169 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:07:17.110351 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:07:17.117634 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:07:17.117696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:07:17.127253 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 06:07:17.127308 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 06:07:17.138516 systemd[1]: Stopped target network.target - Network. Jul 7 06:07:17.143644 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:07:17.143713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:07:17.156207 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:07:17.167498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 7 06:07:17.173336 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:17.180177 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:07:17.190783 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:07:17.200764 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:07:17.200830 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:07:17.211552 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:07:17.211609 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:07:17.224914 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:07:17.224973 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:07:17.230655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:07:17.230703 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:07:17.242583 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:07:17.491562 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: Data path switched from VF: enP12779s1 Jul 7 06:07:17.254822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:07:17.265274 systemd-networkd[867]: eth0: DHCPv6 lease lost Jul 7 06:07:17.265751 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:07:17.265851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:07:17.279554 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:07:17.279665 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:07:17.291946 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:07:17.292056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:07:17.307330 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:07:17.307395 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:07:17.333744 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:07:17.352596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:07:17.352688 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:07:17.359869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:07:17.359923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:07:17.370361 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:07:17.370419 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:17.381557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:07:17.381625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:17.399994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:17.436595 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:07:17.436792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:17.449479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:07:17.449539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:17.460426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 7 06:07:17.460470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:17.479538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:07:17.479619 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:07:17.497951 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:07:17.498029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:07:17.507805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:07:17.507878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:17.545437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:07:17.558984 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:07:17.559063 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:17.573554 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 06:07:17.573624 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:17.587194 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:07:17.587288 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:17.599491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:17.599556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:17.611447 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:07:17.611574 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:07:17.624258 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:07:17.626246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:07:17.776985 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:07:17.777141 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:07:17.787795 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:07:17.798634 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:07:17.798705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:07:17.828474 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:07:17.859207 systemd[1]: Switching root. 
Jul 7 06:07:17.944907 systemd-journald[217]: Journal stopped Jul 7 06:07:05.310882 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 7 06:07:05.310904 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 06:07:05.310912 kernel: KASLR enabled Jul 7 06:07:05.310918 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 7 06:07:05.310925 kernel: printk: bootconsole [pl11] enabled Jul 7 06:07:05.310931 kernel: efi: EFI v2.7 by EDK II Jul 7 06:07:05.310938 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jul 7 06:07:05.310945 kernel: random: crng init done Jul 7 06:07:05.310951 kernel: ACPI: Early table checksum verification disabled Jul 7 06:07:05.310957 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 7 06:07:05.310964 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310970 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310978 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 7 06:07:05.310984 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310992 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.310998 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311005 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311013 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311020 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311026 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 7 06:07:05.311033 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 06:07:05.311039 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 7 06:07:05.311045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 7 06:07:05.311052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 7 06:07:05.311058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 7 06:07:05.311064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 7 06:07:05.311071 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 7 06:07:05.311077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 7 06:07:05.311085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 7 06:07:05.311091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 7 06:07:05.311098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 7 06:07:05.311104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 7 06:07:05.311110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 7 06:07:05.311117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 7 06:07:05.311123 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 7 06:07:05.311129 kernel: Zone ranges: Jul 7 06:07:05.311136 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jul 7 06:07:05.311142 kernel: DMA32 empty Jul 7 06:07:05.311148 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 06:07:05.311155 kernel: Movable zone start for each node Jul 7 06:07:05.311165 kernel: Early memory node ranges Jul 7 06:07:05.311172 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 7 06:07:05.311179 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 7 06:07:05.311186 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 7 06:07:05.311192 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 7 06:07:05.311201 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 7 06:07:05.311208 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 7 06:07:05.311215 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 7 06:07:05.311221 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 7 06:07:05.311228 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 7 06:07:05.311235 kernel: psci: probing for conduit method from ACPI. Jul 7 06:07:05.311242 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 06:07:05.311248 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 06:07:05.311255 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 7 06:07:05.311262 kernel: psci: SMC Calling Convention v1.4 Jul 7 06:07:05.311269 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 06:07:05.311276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 7 06:07:05.311284 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 06:07:05.311291 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 06:07:05.311298 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 06:07:05.313349 kernel: Detected PIPT I-cache on CPU0 Jul 7 06:07:05.313369 kernel: CPU features: detected: GIC system register CPU interface Jul 7 06:07:05.313377 kernel: CPU features: detected: Hardware dirty bit management Jul 7 06:07:05.313384 kernel: CPU features: detected: Spectre-BHB Jul 7 06:07:05.313391 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 06:07:05.313398 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 06:07:05.313405 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 06:07:05.313412 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 7 06:07:05.313424 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 06:07:05.313431 kernel: alternatives: applying boot alternatives Jul 7 06:07:05.313441 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:07:05.313449 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:07:05.313455 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 06:07:05.313462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:07:05.313469 kernel: Fallback order for Node 0: 0 Jul 7 06:07:05.313476 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jul 7 06:07:05.313483 kernel: Policy zone: Normal Jul 7 06:07:05.313489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:07:05.313496 kernel: software IO TLB: area num 2. Jul 7 06:07:05.313505 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 7 06:07:05.313512 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 7 06:07:05.313519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 06:07:05.313526 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:07:05.313534 kernel: rcu: RCU event tracing is enabled. Jul 7 06:07:05.313541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 06:07:05.313548 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:07:05.313555 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:07:05.313562 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:07:05.313569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 06:07:05.313576 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 06:07:05.313584 kernel: GICv3: 960 SPIs implemented Jul 7 06:07:05.313591 kernel: GICv3: 0 Extended SPIs implemented Jul 7 06:07:05.313598 kernel: Root IRQ handler: gic_handle_irq Jul 7 06:07:05.313604 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 7 06:07:05.313611 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 7 06:07:05.313618 kernel: ITS: No ITS available, not enabling LPIs Jul 7 06:07:05.313625 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:07:05.313632 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:07:05.313639 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 7 06:07:05.313646 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 06:07:05.313653 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 06:07:05.313661 kernel: Console: colour dummy device 80x25 Jul 7 06:07:05.313669 kernel: printk: console [tty1] enabled Jul 7 06:07:05.313676 kernel: ACPI: Core revision 20230628 Jul 7 06:07:05.313683 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 06:07:05.313690 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:07:05.313698 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 06:07:05.313705 kernel: landlock: Up and running. Jul 7 06:07:05.313712 kernel: SELinux: Initializing. Jul 7 06:07:05.313719 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.313726 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.313735 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:07:05.313742 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:07:05.313749 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 7 06:07:05.313756 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 7 06:07:05.313763 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 7 06:07:05.313770 kernel: rcu: Hierarchical SRCU implementation. 
Jul 7 06:07:05.313777 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:07:05.313792 kernel: Remapping and enabling EFI services. Jul 7 06:07:05.313799 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:07:05.313806 kernel: Detected PIPT I-cache on CPU1 Jul 7 06:07:05.313814 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 7 06:07:05.313823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 06:07:05.313830 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 7 06:07:05.313838 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 06:07:05.313845 kernel: SMP: Total of 2 processors activated. Jul 7 06:07:05.313853 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 06:07:05.313862 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 7 06:07:05.313870 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 06:07:05.313878 kernel: CPU features: detected: CRC32 instructions Jul 7 06:07:05.313885 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 06:07:05.313893 kernel: CPU features: detected: LSE atomic instructions Jul 7 06:07:05.313900 kernel: CPU features: detected: Privileged Access Never Jul 7 06:07:05.313908 kernel: CPU: All CPU(s) started at EL1 Jul 7 06:07:05.313915 kernel: alternatives: applying system-wide alternatives Jul 7 06:07:05.313922 kernel: devtmpfs: initialized Jul 7 06:07:05.313932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:07:05.313940 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 06:07:05.313947 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:07:05.313955 kernel: SMBIOS 3.1.0 present. Jul 7 06:07:05.313963 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 7 06:07:05.313970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:07:05.313978 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 06:07:05.313985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 06:07:05.313993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 06:07:05.314002 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:07:05.314009 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 7 06:07:05.314017 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:07:05.314024 kernel: cpuidle: using governor menu Jul 7 06:07:05.314031 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 06:07:05.314039 kernel: ASID allocator initialised with 32768 entries Jul 7 06:07:05.314047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:07:05.314054 kernel: Serial: AMBA PL011 UART driver Jul 7 06:07:05.314061 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 06:07:05.314070 kernel: Modules: 0 pages in range for non-PLT usage Jul 7 06:07:05.314078 kernel: Modules: 509008 pages in range for PLT usage Jul 7 06:07:05.314085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:07:05.314093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:07:05.314101 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 06:07:05.314109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 06:07:05.314117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:07:05.314125 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:07:05.314132 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 06:07:05.314142 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 06:07:05.314150 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:07:05.314157 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:07:05.314165 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:07:05.314172 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 06:07:05.314180 kernel: ACPI: Interpreter enabled Jul 7 06:07:05.314187 kernel: ACPI: Using GIC for interrupt routing Jul 7 06:07:05.314195 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 7 06:07:05.314202 kernel: printk: console [ttyAMA0] enabled Jul 7 06:07:05.314211 kernel: printk: bootconsole [pl11] disabled Jul 7 06:07:05.314218 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 7 06:07:05.314226 kernel: iommu: Default domain type: Translated Jul 7 06:07:05.314233 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 06:07:05.314241 kernel: efivars: Registered efivars operations Jul 7 06:07:05.314248 kernel: vgaarb: loaded Jul 7 06:07:05.314255 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 06:07:05.314263 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:07:05.314270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:07:05.314280 kernel: pnp: PnP ACPI init Jul 7 06:07:05.314287 kernel: pnp: PnP ACPI: found 0 devices Jul 7 06:07:05.314294 kernel: NET: Registered PF_INET protocol family Jul 7 06:07:05.314302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:07:05.314322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 06:07:05.314330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:07:05.314338 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:07:05.314346 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 06:07:05.314354 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 06:07:05.314363 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.314370 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:07:05.314378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:07:05.314386 kernel: PCI: CLS 0 bytes, default 64 
Jul 7 06:07:05.314393 kernel: kvm [1]: HYP mode not available Jul 7 06:07:05.314401 kernel: Initialise system trusted keyrings Jul 7 06:07:05.314409 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 06:07:05.314416 kernel: Key type asymmetric registered Jul 7 06:07:05.314423 kernel: Asymmetric key parser 'x509' registered Jul 7 06:07:05.314432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:07:05.314440 kernel: io scheduler mq-deadline registered Jul 7 06:07:05.314447 kernel: io scheduler kyber registered Jul 7 06:07:05.314454 kernel: io scheduler bfq registered Jul 7 06:07:05.314462 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:07:05.314469 kernel: thunder_xcv, ver 1.0 Jul 7 06:07:05.314477 kernel: thunder_bgx, ver 1.0 Jul 7 06:07:05.314484 kernel: nicpf, ver 1.0 Jul 7 06:07:05.314491 kernel: nicvf, ver 1.0 Jul 7 06:07:05.314661 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 06:07:05.314740 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:07:04 UTC (1751868424) Jul 7 06:07:05.314750 kernel: efifb: probing for efifb Jul 7 06:07:05.314758 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 7 06:07:05.314765 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 7 06:07:05.314773 kernel: efifb: scrolling: redraw Jul 7 06:07:05.314781 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 06:07:05.314788 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 06:07:05.314797 kernel: fb0: EFI VGA frame buffer device Jul 7 06:07:05.314805 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 7 06:07:05.314812 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:07:05.314820 kernel: No ACPI PMU IRQ for CPU0 Jul 7 06:07:05.314827 kernel: No ACPI PMU IRQ for CPU1 Jul 7 06:07:05.314834 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 7 06:07:05.314842 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 06:07:05.314849 kernel: watchdog: Hard watchdog permanently disabled Jul 7 06:07:05.314856 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:07:05.314865 kernel: Segment Routing with IPv6 Jul 7 06:07:05.314873 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:07:05.314881 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:07:05.314888 kernel: Key type dns_resolver registered Jul 7 06:07:05.314895 kernel: registered taskstats version 1 Jul 7 06:07:05.314903 kernel: Loading compiled-in X.509 certificates Jul 7 06:07:05.314910 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 06:07:05.314917 kernel: Key type .fscrypt registered Jul 7 06:07:05.314925 kernel: Key type fscrypt-provisioning registered Jul 7 06:07:05.314934 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 06:07:05.314942 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:07:05.314949 kernel: ima: No architecture policies found Jul 7 06:07:05.314957 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 06:07:05.314965 kernel: clk: Disabling unused clocks Jul 7 06:07:05.314972 kernel: Freeing unused kernel memory: 39424K Jul 7 06:07:05.314980 kernel: Run /init as init process Jul 7 06:07:05.314987 kernel: with arguments: Jul 7 06:07:05.314994 kernel: /init Jul 7 06:07:05.315004 kernel: with environment: Jul 7 06:07:05.315012 kernel: HOME=/ Jul 7 06:07:05.315019 kernel: TERM=linux Jul 7 06:07:05.315026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:07:05.315036 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:07:05.315046 systemd[1]: Detected virtualization microsoft. Jul 7 06:07:05.315054 systemd[1]: Detected architecture arm64. Jul 7 06:07:05.315061 systemd[1]: Running in initrd. Jul 7 06:07:05.315071 systemd[1]: No hostname configured, using default hostname. Jul 7 06:07:05.315079 systemd[1]: Hostname set to . Jul 7 06:07:05.315087 systemd[1]: Initializing machine ID from random generator. Jul 7 06:07:05.315095 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:07:05.315103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:05.315111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:05.315119 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:07:05.315127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:07:05.315137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:07:05.315146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:07:05.315156 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:07:05.315164 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:07:05.315173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:05.315181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:05.315190 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:07:05.315198 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:07:05.315206 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:07:05.315214 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:07:05.315225 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:07:05.315235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:07:05.315245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:07:05.315254 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 06:07:05.315263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 7 06:07:05.315276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:05.315285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:05.315295 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:07:05.317348 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:07:05.317376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:07:05.317385 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:07:05.317394 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:07:05.317402 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:07:05.317440 systemd-journald[217]: Collecting audit messages is disabled. Jul 7 06:07:05.317467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:07:05.317476 systemd-journald[217]: Journal started Jul 7 06:07:05.317501 systemd-journald[217]: Runtime Journal (/run/log/journal/3793d7731ea74c2fb230dd3105cc1432) is 8.0M, max 78.5M, 70.5M free. Jul 7 06:07:05.317546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:05.323770 systemd-modules-load[218]: Inserted module 'overlay' Jul 7 06:07:05.354325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:07:05.363903 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:07:05.363958 kernel: Bridge firewalling registered Jul 7 06:07:05.364044 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 7 06:07:05.372557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:07:05.383949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:05.396618 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:07:05.407299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:05.417514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:05.438602 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:05.447500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:07:05.473537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:07:05.491531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:07:05.504335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:05.517389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:07:05.534956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:05.547767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:05.572554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:07:05.580487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:07:05.595508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:07:05.620359 dracut-cmdline[250]: dracut-dracut-053 Jul 7 06:07:05.626131 systemd-resolved[251]: Positive Trust Anchors: Jul 7 06:07:05.626152 systemd-resolved[251]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:07:05.646790 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 06:07:05.626183 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:07:05.628424 systemd-resolved[251]: Defaulting to hostname 'linux'. Jul 7 06:07:05.631402 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:07:05.640046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:05.655099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:05.815334 kernel: SCSI subsystem initialized Jul 7 06:07:05.824322 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:07:05.835329 kernel: iscsi: registered transport (tcp) Jul 7 06:07:05.853521 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:07:05.853578 kernel: QLogic iSCSI HBA Driver Jul 7 06:07:05.894923 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:07:05.909667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:07:05.942684 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:07:05.942757 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:07:05.948858 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 06:07:05.998337 kernel: raid6: neonx8 gen() 15754 MB/s Jul 7 06:07:06.018321 kernel: raid6: neonx4 gen() 15685 MB/s Jul 7 06:07:06.038319 kernel: raid6: neonx2 gen() 13239 MB/s Jul 7 06:07:06.059318 kernel: raid6: neonx1 gen() 10492 MB/s Jul 7 06:07:06.079316 kernel: raid6: int64x8 gen() 6966 MB/s Jul 7 06:07:06.099316 kernel: raid6: int64x4 gen() 7349 MB/s Jul 7 06:07:06.120323 kernel: raid6: int64x2 gen() 6131 MB/s Jul 7 06:07:06.143441 kernel: raid6: int64x1 gen() 5058 MB/s Jul 7 06:07:06.143463 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Jul 7 06:07:06.167448 kernel: raid6: .... xor() 11947 MB/s, rmw enabled Jul 7 06:07:06.167479 kernel: raid6: using neon recovery algorithm Jul 7 06:07:06.179701 kernel: xor: measuring software checksum speed Jul 7 06:07:06.179731 kernel: 8regs : 19754 MB/sec Jul 7 06:07:06.183185 kernel: 32regs : 19617 MB/sec Jul 7 06:07:06.186840 kernel: arm64_neon : 26238 MB/sec Jul 7 06:07:06.190731 kernel: xor: using function: arm64_neon (26238 MB/sec) Jul 7 06:07:06.242331 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:07:06.253238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 7 06:07:06.268490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:06.290954 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jul 7 06:07:06.294303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:06.317506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:07:06.329527 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Jul 7 06:07:06.356893 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:07:06.372441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:07:06.415050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:06.434621 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:07:06.456413 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:07:06.467519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:07:06.487767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:06.501864 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:07:06.517590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:07:06.542515 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 06:07:06.542914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:07:06.588984 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 06:07:06.589043 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:07:06.589054 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 06:07:06.589068 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:07:06.585152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:07:06.624252 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 7 06:07:06.624285 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 7 06:07:06.624295 kernel: hv_vmbus: registering driver hv_storvsc Jul 7 06:07:06.624315 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 7 06:07:06.585381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:06.663030 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 06:07:06.663059 kernel: scsi host0: storvsc_host_t Jul 7 06:07:06.663236 kernel: scsi host1: storvsc_host_t Jul 7 06:07:06.663345 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 7 06:07:06.607210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:06.643191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:06.643544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:06.669499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:06.704867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:06.728718 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 7 06:07:06.727751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 7 06:07:06.741642 kernel: PTP clock support registered Jul 7 06:07:06.727849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:06.773366 kernel: hv_utils: Registering HyperV Utility Driver Jul 7 06:07:06.773390 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: VF slot 1 added Jul 7 06:07:06.773550 kernel: hv_vmbus: registering driver hv_utils Jul 7 06:07:06.780511 kernel: hv_utils: Heartbeat IC version 3.0 Jul 7 06:07:06.780553 kernel: hv_utils: Shutdown IC version 3.2 Jul 7 06:07:06.774611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:07.072747 kernel: hv_utils: TimeSync IC version 4.0 Jul 7 06:07:07.072776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 7 06:07:07.072971 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:07:07.060560 systemd-resolved[251]: Clock change detected. Flushing caches. Jul 7 06:07:07.088334 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 7 06:07:07.089534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:07.114267 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 7 06:07:07.114512 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 7 06:07:07.119317 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:07:07.164972 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 7 06:07:07.165139 kernel: hv_vmbus: registering driver hv_pci Jul 7 06:07:07.165150 kernel: hv_pci 7ad42c06-31eb-4567-918e-69a6bfc6e133: PCI VMBus probing: Using version 0x10004 Jul 7 06:07:07.165282 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 7 06:07:07.165377 kernel: hv_pci 7ad42c06-31eb-4567-918e-69a6bfc6e133: PCI host bridge to bus 31eb:00 Jul 7 06:07:07.165453 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 7 06:07:07.165536 kernel: pci_bus 31eb:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 7 06:07:07.172841 kernel: pci_bus 31eb:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:07:07.173009 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:07.181050 kernel: pci 31eb:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 7 06:07:07.181105 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 7 06:07:07.190510 kernel: pci 31eb:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 7 06:07:07.197776 kernel: pci 31eb:00:02.0: enabling Extended Tags Jul 7 06:07:07.223313 kernel: pci 31eb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 31eb:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 7 06:07:07.225413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 06:07:07.251140 kernel: pci_bus 31eb:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:07:07.251321 kernel: pci 31eb:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 7 06:07:07.280280 kernel: mlx5_core 31eb:00:02.0: enabling device (0000 -> 0002) Jul 7 06:07:07.286241 kernel: mlx5_core 31eb:00:02.0: firmware version: 16.30.1284 Jul 7 06:07:07.485865 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: VF registering: eth1 Jul 7 06:07:07.486071 kernel: mlx5_core 31eb:00:02.0 eth1: joined to eth0 Jul 7 06:07:07.493267 kernel: mlx5_core 31eb:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 7 06:07:07.503258 kernel: mlx5_core 31eb:00:02.0 enP12779s1: renamed from eth1 Jul 7 06:07:07.685246 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497) Jul 7 06:07:07.700000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 7 06:07:07.769398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 7 06:07:08.683162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 7 06:07:09.802262 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498) Jul 7 06:07:09.817442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 7 06:07:09.823879 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 7 06:07:09.855531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:07:09.883264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:09.894253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:10.904996 disk-uuid[604]: The operation has completed successfully. Jul 7 06:07:10.910134 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:07:10.970276 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:07:10.970384 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:07:11.006395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:07:11.018860 sh[690]: Success Jul 7 06:07:11.042318 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 06:07:11.221933 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:07:11.229316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:07:11.251317 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:07:11.281028 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 06:07:11.281094 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:11.287762 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 06:07:11.292831 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 06:07:11.298241 kernel: BTRFS info (device dm-0): using free space tree Jul 7 06:07:11.603042 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:07:11.608549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:07:11.634573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 7 06:07:11.642418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:07:11.679084 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:11.679139 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:11.683643 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:11.726142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:07:11.740878 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:11.744481 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:07:11.765402 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:11.759964 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 06:07:11.778200 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:07:11.795402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:07:11.811045 systemd-networkd[867]: lo: Link UP Jul 7 06:07:11.811061 systemd-networkd[867]: lo: Gained carrier Jul 7 06:07:11.812632 systemd-networkd[867]: Enumeration completed Jul 7 06:07:11.814730 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:07:11.815433 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:11.815437 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:07:11.825600 systemd[1]: Reached target network.target - Network. Jul 7 06:07:11.910246 kernel: mlx5_core 31eb:00:02.0 enP12779s1: Link up Jul 7 06:07:11.956469 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: Data path switched to VF: enP12779s1 Jul 7 06:07:11.956115 systemd-networkd[867]: enP12779s1: Link UP Jul 7 06:07:11.956203 systemd-networkd[867]: eth0: Link UP Jul 7 06:07:11.956325 systemd-networkd[867]: eth0: Gained carrier Jul 7 06:07:11.956334 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:11.981592 systemd-networkd[867]: enP12779s1: Gained carrier Jul 7 06:07:11.998307 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 06:07:12.575127 ignition[874]: Ignition 2.19.0 Jul 7 06:07:12.575334 ignition[874]: Stage: fetch-offline Jul 7 06:07:12.575391 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.575401 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.587177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:07:12.575543 ignition[874]: parsed url from cmdline: "" Jul 7 06:07:12.606549 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 06:07:12.575550 ignition[874]: no config URL provided Jul 7 06:07:12.575555 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:07:12.575563 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:07:12.575569 ignition[874]: failed to fetch config: resource requires networking Jul 7 06:07:12.578882 ignition[874]: Ignition finished successfully Jul 7 06:07:12.629743 ignition[883]: Ignition 2.19.0 Jul 7 06:07:12.629754 ignition[883]: Stage: fetch Jul 7 06:07:12.629960 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.629970 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.630079 ignition[883]: parsed url from cmdline: "" Jul 7 06:07:12.630083 ignition[883]: no config URL provided Jul 7 06:07:12.630088 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:07:12.630098 ignition[883]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:07:12.630121 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 7 06:07:12.752511 ignition[883]: GET result: OK Jul 7 06:07:12.752571 ignition[883]: config has been read from IMDS userdata Jul 7 06:07:12.752617 ignition[883]: parsing config with SHA512: 5ba6ef795f26cfe065b5eb98a69ae88817b5e2bebcf78de77635b613ac1d4f644da2ef7a76923e599b5a7cbf33b33a35a3758ac75afab9cc2c3845e0f7d50827 Jul 7 06:07:12.756251 unknown[883]: fetched base config from "system" Jul 7 06:07:12.756614 ignition[883]: fetch: fetch complete Jul 7 06:07:12.756260 unknown[883]: fetched base config from "system" Jul 7 06:07:12.756619 ignition[883]: fetch: fetch passed Jul 7 06:07:12.756265 unknown[883]: fetched user config from "azure" Jul 7 06:07:12.756662 ignition[883]: Ignition finished successfully Jul 7 06:07:12.762350 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:07:12.781377 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:07:12.799683 ignition[889]: Ignition 2.19.0 Jul 7 06:07:12.802859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:07:12.799690 ignition[889]: Stage: kargs Jul 7 06:07:12.799877 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.799887 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.827536 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:07:12.800791 ignition[889]: kargs: kargs passed Jul 7 06:07:12.800840 ignition[889]: Ignition finished successfully Jul 7 06:07:12.855268 ignition[896]: Ignition 2.19.0 Jul 7 06:07:12.855276 ignition[896]: Stage: disks Jul 7 06:07:12.862269 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:07:12.855476 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:12.868768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:07:12.855486 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:12.877651 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:07:12.856595 ignition[896]: disks: disks passed Jul 7 06:07:12.889385 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:07:12.856654 ignition[896]: Ignition finished successfully Jul 7 06:07:12.899472 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:07:12.910684 systemd[1]: Reached target basic.target - Basic System. 
Jul 7 06:07:12.937503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:07:13.017320 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 7 06:07:13.026069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:07:13.044475 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:07:13.107290 kernel: EXT4-fs (sda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 06:07:13.108296 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:07:13.117491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:07:13.161323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:07:13.168424 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:07:13.196259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (915) Jul 7 06:07:13.196321 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:13.202389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:13.203470 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:07:13.225833 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:13.225858 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:13.207338 systemd-networkd[867]: eth0: Gained IPv6LL Jul 7 06:07:13.214499 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:07:13.214535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:07:13.233454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:07:13.242113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:07:13.274547 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:07:13.716358 systemd-networkd[867]: enP12779s1: Gained IPv6LL Jul 7 06:07:13.742083 coreos-metadata[917]: Jul 07 06:07:13.741 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:07:13.752245 coreos-metadata[917]: Jul 07 06:07:13.752 INFO Fetch successful Jul 7 06:07:13.757932 coreos-metadata[917]: Jul 07 06:07:13.757 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:07:13.779506 coreos-metadata[917]: Jul 07 06:07:13.779 INFO Fetch successful Jul 7 06:07:13.793244 coreos-metadata[917]: Jul 07 06:07:13.793 INFO wrote hostname ci-4081.3.4-a-d5356a388e to /sysroot/etc/hostname Jul 7 06:07:13.802488 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:07:13.995063 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:07:14.015930 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:07:14.023952 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:07:14.033920 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:07:14.817940 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:07:14.832430 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:07:14.844848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
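The fsck pass above summarizes usage as "14/7326000 files, 477710/7359488 blocks". Assuming the standard e2fsck summary format, those ratios work out to a nearly empty inode table and about 6.5% of blocks in use:

    import re

    summary = "ROOT: clean, 14/7326000 files, 477710/7359488 blocks"
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", summary)
    inodes_used, inodes_total, blocks_used, blocks_total = map(int, m.groups())
    print(f"inodes {100 * inodes_used / inodes_total:.4f}% used, "
          f"blocks {100 * blocks_used / blocks_total:.2f}% used")  # 0.0002% / 6.49%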
Jul 7 06:07:14.860479 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:07:14.872602 kernel: BTRFS info (device sda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:14.882025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:07:14.896549 ignition[1036]: INFO : Ignition 2.19.0 Jul 7 06:07:14.901598 ignition[1036]: INFO : Stage: mount Jul 7 06:07:14.901598 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:14.901598 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:14.925166 ignition[1036]: INFO : mount: mount passed Jul 7 06:07:14.925166 ignition[1036]: INFO : Ignition finished successfully Jul 7 06:07:14.912455 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:07:14.937343 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:07:14.950425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:07:14.983633 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1044) Jul 7 06:07:14.983684 kernel: BTRFS info (device sda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 06:07:14.989877 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 7 06:07:14.994202 kernel: BTRFS info (device sda6): using free space tree Jul 7 06:07:15.001244 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 06:07:15.002961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:07:15.027528 ignition[1062]: INFO : Ignition 2.19.0 Jul 7 06:07:15.027528 ignition[1062]: INFO : Stage: files Jul 7 06:07:15.035475 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:15.035475 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:15.035475 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:07:15.054383 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:07:15.054383 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:07:15.127598 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:07:15.135205 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:07:15.135205 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:07:15.127964 unknown[1062]: wrote ssh authorized keys file for user: core Jul 7 06:07:15.155479 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:07:15.166012 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 7 06:07:15.211088 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:07:15.359441 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:07:15.370918 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:15.459645 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 7 06:07:16.165188 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:07:16.391509 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 06:07:16.391509 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:07:16.416865 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:07:16.428556 ignition[1062]: INFO : files: files passed Jul 7 06:07:16.428556 ignition[1062]: INFO : Ignition finished successfully
Jul 7 06:07:16.429196 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:07:16.469575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:07:16.487431 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:07:16.547034 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.547034 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.512416 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:07:16.575482 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:07:16.512507 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:07:16.523481 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:07:16.540488 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:07:16.584501 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:07:16.624616 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:07:16.624765 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:07:16.637470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:07:16.649728 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:07:16.660825 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:07:16.680547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:07:16.691686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:07:16.705465 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:07:16.730031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:16.743794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:16.750802 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:07:16.761829 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:07:16.762023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:07:16.778104 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:07:16.789711 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:07:16.799945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:07:16.811020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:07:16.822974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:07:16.835895 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:07:16.848082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:07:16.861291 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:07:16.874686 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:07:16.886038 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:07:16.896029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
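The microsecond timestamps make it easy to time each Ignition stage after the fact. As an illustration only (nothing in the boot does this), the files stage above ran from its first entry at 06:07:15.027528 to "files passed" at 06:07:16.428556, roughly 1.4 seconds, most of it spent on the two downloads:

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    start = datetime.strptime("06:07:15.027528", FMT)  # "Stage: files" begins
    end = datetime.strptime("06:07:16.428556", FMT)    # "files: files passed"
    print((end - start).total_seconds())               # 1.401028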
Jul 7 06:07:16.896211 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:07:16.912392 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:16.924582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:16.937206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:07:16.943297 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:16.950829 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:07:16.951000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:07:16.969455 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:07:16.969646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:07:16.982380 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:07:16.982541 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:07:16.993447 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 06:07:16.993603 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:07:17.026387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:07:17.055422 ignition[1113]: INFO : Ignition 2.19.0 Jul 7 06:07:17.055422 ignition[1113]: INFO : Stage: umount Jul 7 06:07:17.055422 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:07:17.055422 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:07:17.055422 ignition[1113]: INFO : umount: umount passed Jul 7 06:07:17.055422 ignition[1113]: INFO : Ignition finished successfully Jul 7 06:07:17.050287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:07:17.064974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:07:17.065167 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:17.082399 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:07:17.082534 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:07:17.098089 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:07:17.098778 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:07:17.098880 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:07:17.110169 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:07:17.110351 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:07:17.117634 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:07:17.117696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:07:17.127253 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 06:07:17.127308 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 06:07:17.138516 systemd[1]: Stopped target network.target - Network. Jul 7 06:07:17.143644 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:07:17.143713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:07:17.156207 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:07:17.167498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 7 06:07:17.173336 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:17.180177 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:07:17.190783 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:07:17.200764 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:07:17.200830 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:07:17.211552 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:07:17.211609 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:07:17.224914 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:07:17.224973 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:07:17.230655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:07:17.230703 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:07:17.242583 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:07:17.491562 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: Data path switched from VF: enP12779s1 Jul 7 06:07:17.254822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:07:17.265274 systemd-networkd[867]: eth0: DHCPv6 lease lost Jul 7 06:07:17.265751 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:07:17.265851 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:07:17.279554 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:07:17.279665 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:07:17.291946 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:07:17.292056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:07:17.307330 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:07:17.307395 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:07:17.333744 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:07:17.352596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:07:17.352688 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:07:17.359869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:07:17.359923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:07:17.370361 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:07:17.370419 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:17.381557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:07:17.381625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:17.399994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:17.436595 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:07:17.436792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:17.449479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:07:17.449539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:17.460426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 7 06:07:17.460470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:17.479538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:07:17.479619 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:07:17.497951 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:07:17.498029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:07:17.507805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:07:17.507878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:07:17.545437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:07:17.558984 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:07:17.559063 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:17.573554 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 06:07:17.573624 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:17.587194 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:07:17.587288 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:17.599491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:17.599556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:17.611447 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:07:17.611574 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:07:17.624258 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:07:17.626246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:07:17.776985 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:07:17.777141 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:07:17.787795 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:07:17.798634 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:07:17.798705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:07:17.828474 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:07:17.859207 systemd[1]: Switching root. Jul 7 06:07:17.944907 systemd-journald[217]: Journal stopped Jul 7 06:07:23.999132 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jul 7 06:07:23.999175 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:07:23.999189 kernel: SELinux: policy capability open_perms=1 Jul 7 06:07:23.999205 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:07:23.999215 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:07:23.999248 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:07:23.999259 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:07:23.999268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:07:23.999278 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:07:23.999287 kernel: audit: type=1403 audit(1751868438.909:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:07:23.999299 systemd[1]: Successfully loaded SELinux policy in 135.783ms. 
Jul 7 06:07:23.999309 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.440ms. Jul 7 06:07:23.999320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:07:23.999329 systemd[1]: Detected virtualization microsoft. Jul 7 06:07:23.999339 systemd[1]: Detected architecture arm64. Jul 7 06:07:23.999349 systemd[1]: Detected first boot. Jul 7 06:07:23.999359 systemd[1]: Hostname set to <ci-4081.3.4-a-d5356a388e>. Jul 7 06:07:23.999368 systemd[1]: Initializing machine ID from random generator. Jul 7 06:07:23.999378 zram_generator::config[1155]: No configuration found. Jul 7 06:07:23.999388 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:07:23.999397 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:07:23.999408 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:07:23.999418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:07:23.999428 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:07:23.999437 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:07:23.999447 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:07:23.999457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:07:23.999466 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:07:23.999477 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:07:23.999488 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:07:23.999498 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:07:23.999507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:07:23.999517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:07:23.999527 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:07:23.999536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:07:23.999546 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:07:23.999555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:07:23.999567 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 06:07:23.999576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:07:23.999586 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:07:23.999598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:07:23.999608 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:07:23.999617 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:07:23.999627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:07:23.999638 systemd[1]: Reached target remote-fs.target - Remote File Systems.
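The "Initializing machine ID from random generator" entry above is a first-boot step: /etc/machine-id is populated with 32 lowercase hex characters, and the runtime journal directory that appears shortly afterwards (585e51b5929e44f69d143b3b1e455e78) is named after exactly that ID. A rough approximation of the shape of such an ID, not systemd's exact derivation:

    import uuid

    # A machine ID initialized from the random generator is effectively a random
    # 128-bit value rendered as 32 lowercase hex characters; uuid4() is used here
    # purely as an equivalent source of randomness for illustration.
    machine_id = uuid.uuid4().hex
    print(machine_id)  # same shape as 585e51b5929e44f69d143b3b1e455e78 above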
Jul 7 06:07:23.999648 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:07:23.999658 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:07:23.999667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:07:23.999677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:07:23.999687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:07:23.999698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:07:23.999710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:07:23.999720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:07:23.999730 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:07:23.999741 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:07:23.999750 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:07:23.999760 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:07:23.999771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:07:23.999781 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:07:23.999791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:07:23.999801 systemd[1]: Reached target machines.target - Containers. Jul 7 06:07:23.999811 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:07:23.999821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:23.999831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:07:23.999841 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:07:23.999852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:23.999862 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:07:23.999872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:23.999882 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:07:23.999892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:07:23.999902 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:07:23.999913 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:07:23.999923 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:07:23.999932 kernel: fuse: init (API version 7.39) Jul 7 06:07:23.999942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:07:23.999952 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:07:23.999962 kernel: loop: module loaded Jul 7 06:07:23.999971 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:07:23.999981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:07:23.999991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 7 06:07:24.000001 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:07:24.000041 systemd-journald[1258]: Collecting audit messages is disabled. Jul 7 06:07:24.000068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:07:24.000078 kernel: ACPI: bus type drm_connector registered Jul 7 06:07:24.000088 systemd-journald[1258]: Journal started Jul 7 06:07:24.000110 systemd-journald[1258]: Runtime Journal (/run/log/journal/585e51b5929e44f69d143b3b1e455e78) is 8.0M, max 78.5M, 70.5M free. Jul 7 06:07:22.936930 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:07:23.095123 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 06:07:23.095497 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:07:23.095807 systemd[1]: systemd-journald.service: Consumed 3.153s CPU time. Jul 7 06:07:24.020252 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:07:24.020330 systemd[1]: Stopped verity-setup.service. Jul 7 06:07:24.040615 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:07:24.041607 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:07:24.048176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:07:24.054503 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:07:24.060011 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:07:24.067163 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:07:24.073779 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:07:24.079854 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:07:24.088855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:07:24.096456 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:07:24.097428 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:07:24.104560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:24.104717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:24.111855 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:07:24.113271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:07:24.119616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:07:24.119759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:24.126986 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:07:24.127139 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:07:24.133947 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:24.134088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:24.140519 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:07:24.147555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:07:24.155062 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:07:24.162831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:07:24.181741 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 7 06:07:24.193326 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:07:24.201048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:07:24.207852 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:07:24.208071 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:07:24.215006 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 06:07:24.227562 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:07:24.235732 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:07:24.241650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:24.243388 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:07:24.250786 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:07:24.257645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:24.259029 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:07:24.266506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:24.267690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:07:24.276443 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:07:24.293406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:07:24.305082 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 06:07:24.311520 systemd-journald[1258]: Time spent on flushing to /var/log/journal/585e51b5929e44f69d143b3b1e455e78 is 14.126ms for 897 entries. Jul 7 06:07:24.311520 systemd-journald[1258]: System Journal (/var/log/journal/585e51b5929e44f69d143b3b1e455e78) is 8.0M, max 2.6G, 2.6G free. Jul 7 06:07:24.356369 systemd-journald[1258]: Received client request to flush runtime journal. Jul 7 06:07:24.356409 kernel: loop0: detected capacity change from 0 to 207008 Jul 7 06:07:24.320555 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:07:24.332463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:07:24.339565 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:07:24.352931 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:07:24.362283 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:07:24.376671 udevadm[1292]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 06:07:24.377940 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:07:24.396580 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 06:07:24.404170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
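journald reports above that flushing 897 entries to /var/log/journal took 14.126ms, which is on the order of 16 microseconds per entry:

    entries, flush_ms = 897, 14.126
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # 15.7 us per entry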
Jul 7 06:07:24.424264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:07:24.460905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:07:24.462352 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 06:07:24.473579 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 7 06:07:24.473595 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 7 06:07:24.479717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:07:24.491933 kernel: loop1: detected capacity change from 0 to 114328 Jul 7 06:07:24.501410 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:07:24.613729 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:07:24.625597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:07:24.646895 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 7 06:07:24.646916 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 7 06:07:24.651179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:07:24.867253 kernel: loop2: detected capacity change from 0 to 114432 Jul 7 06:07:25.242248 kernel: loop3: detected capacity change from 0 to 31320 Jul 7 06:07:25.434055 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:07:25.446410 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:07:25.474774 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Jul 7 06:07:25.561278 kernel: loop4: detected capacity change from 0 to 207008 Jul 7 06:07:25.575259 kernel: loop5: detected capacity change from 0 to 114328 Jul 7 06:07:25.585245 kernel: loop6: detected capacity change from 0 to 114432 Jul 7 06:07:25.594507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:07:25.602335 kernel: loop7: detected capacity change from 0 to 31320 Jul 7 06:07:25.606872 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 7 06:07:25.607301 (sd-merge)[1318]: Merged extensions into '/usr'. Jul 7 06:07:25.615349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:07:25.636585 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:07:25.636606 systemd[1]: Reloading... Jul 7 06:07:25.729357 zram_generator::config[1366]: No configuration found. 
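The sd-merge step above folds the listed system extensions into /usr. The kubernetes image qualifies because the Ignition files stage earlier linked /etc/extensions/kubernetes.raw to the downloaded image under /opt/extensions (written under the /sysroot prefix at the time). A sketch of that activation step using a plain symlink, standing in for what Ignition's op(9) recorded:

    import os

    target = "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
    link = "/etc/extensions/kubernetes.raw"
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(target, link)  # systemd-sysext merges anything linked here into /usr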
Jul 7 06:07:25.850393 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:07:25.850483 kernel: hv_vmbus: registering driver hv_balloon Jul 7 06:07:25.875675 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 7 06:07:25.875778 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 7 06:07:25.913254 kernel: hv_vmbus: registering driver hyperv_fb Jul 7 06:07:25.926237 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 7 06:07:25.926336 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 7 06:07:25.933868 kernel: Console: switching to colour dummy device 80x25 Jul 7 06:07:25.937355 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 06:07:25.940679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:25.966491 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1335) Jul 7 06:07:26.015289 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 06:07:26.015630 systemd[1]: Reloading finished in 378 ms. Jul 7 06:07:26.035589 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:07:26.068502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 7 06:07:26.082570 systemd[1]: Starting ensure-sysext.service... Jul 7 06:07:26.091081 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:07:26.100247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:07:26.110097 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:07:26.119533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:26.138837 systemd[1]: Reloading requested from client PID 1474 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:07:26.138855 systemd[1]: Reloading... Jul 7 06:07:26.146975 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:07:26.147282 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:07:26.147965 systemd-tmpfiles[1476]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:07:26.148184 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Jul 7 06:07:26.153078 systemd-tmpfiles[1476]: ACLs are not supported, ignoring. Jul 7 06:07:26.171676 systemd-tmpfiles[1476]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:07:26.171690 systemd-tmpfiles[1476]: Skipping /boot Jul 7 06:07:26.191111 systemd-tmpfiles[1476]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:07:26.191128 systemd-tmpfiles[1476]: Skipping /boot Jul 7 06:07:26.241260 zram_generator::config[1518]: No configuration found. Jul 7 06:07:26.318646 systemd-networkd[1330]: lo: Link UP Jul 7 06:07:26.318659 systemd-networkd[1330]: lo: Gained carrier Jul 7 06:07:26.322494 systemd-networkd[1330]: Enumeration completed Jul 7 06:07:26.322883 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 06:07:26.322886 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:07:26.365352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:26.372055 kernel: mlx5_core 31eb:00:02.0 enP12779s1: Link up Jul 7 06:07:26.398716 kernel: hv_netvsc 00224879-5c5e-0022-4879-5c5e00224879 eth0: Data path switched to VF: enP12779s1 Jul 7 06:07:26.399322 systemd-networkd[1330]: enP12779s1: Link UP Jul 7 06:07:26.399440 systemd-networkd[1330]: eth0: Link UP Jul 7 06:07:26.399443 systemd-networkd[1330]: eth0: Gained carrier Jul 7 06:07:26.399460 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:26.404588 systemd-networkd[1330]: enP12779s1: Gained carrier Jul 7 06:07:26.410393 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 06:07:26.441609 systemd[1]: Reloading finished in 302 ms. Jul 7 06:07:26.457912 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:07:26.464576 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:07:26.481818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:07:26.489650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:07:26.497821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:07:26.498038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:26.518543 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:07:26.542548 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:07:26.551647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:07:26.562515 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:07:26.578826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:07:26.586765 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:07:26.601298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:07:26.611173 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 06:07:26.629683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:07:26.638608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:07:26.652485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:26.657631 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 06:07:26.667511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:26.679373 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:26.689549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 06:07:26.696757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:26.697645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:26.698191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:26.706124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:07:26.706433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:26.713144 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:26.713308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:26.722290 augenrules[1606]: No rules Jul 7 06:07:26.722612 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:07:26.726630 systemd-resolved[1586]: Positive Trust Anchors: Jul 7 06:07:26.726958 systemd-resolved[1586]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:07:26.727046 systemd-resolved[1586]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:07:26.734415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:26.734597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:26.737877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:07:26.744517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:07:26.748362 systemd-resolved[1586]: Using system hostname 'ci-4081.3.4-a-d5356a388e'. Jul 7 06:07:26.752195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:07:26.762150 lvm[1602]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:07:26.762544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:07:26.773678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:07:26.779366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:07:26.779566 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:07:26.785823 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:07:26.792680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:07:26.794270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:07:26.801521 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:07:26.801661 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:07:26.809522 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
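The positive trust anchor that systemd-resolved prints above is the DNSSEC root zone DS record for key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256) with digest type 2 (SHA-256, hence 64 hex digits). Its presentation format splits cleanly on whitespace:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(owner, key_tag, algorithm, digest_type, len(digest))  # . 20326 8 2 64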
Jul 7 06:07:26.817940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:07:26.818083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:07:26.825421 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:07:26.825564 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:07:26.834092 systemd[1]: Finished ensure-sysext.service. Jul 7 06:07:26.842118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:07:26.848771 systemd[1]: Reached target network.target - Network. Jul 7 06:07:26.854420 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:07:26.867745 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 06:07:26.873762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:07:26.873818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:07:26.874468 lvm[1626]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:07:26.901171 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 06:07:26.920473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:07:27.085691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:07:27.093323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:07:28.052432 systemd-networkd[1330]: eth0: Gained IPv6LL Jul 7 06:07:28.056292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:07:28.064304 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:07:28.180325 systemd-networkd[1330]: enP12779s1: Gained IPv6LL Jul 7 06:07:28.915254 ldconfig[1284]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:07:28.932507 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:07:28.945502 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:07:28.959359 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:07:28.965907 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:07:28.972169 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:07:28.978946 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:07:28.986275 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:07:28.992403 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:07:29.000076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:07:29.007615 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:07:29.007653 systemd[1]: Reached target paths.target - Path Units. 
Jul 7 06:07:29.012939 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:07:29.020296 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:07:29.028270 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:07:29.037150 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:07:29.043523 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:07:29.049909 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:07:29.055725 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:07:29.062724 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:07:29.062753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:07:29.077347 systemd[1]: Starting chronyd.service - NTP client/server... Jul 7 06:07:29.086392 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:07:29.101361 (chronyd)[1639]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 7 06:07:29.106443 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 06:07:29.114437 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:07:29.123718 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:07:29.131573 chronyd[1647]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 7 06:07:29.132035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:07:29.137859 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:07:29.137905 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 7 06:07:29.140447 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 7 06:07:29.145660 jq[1645]: false Jul 7 06:07:29.149298 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 7 06:07:29.156394 KVP[1649]: KVP starting; pid is:1649 Jul 7 06:07:29.157538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:29.159979 chronyd[1647]: Timezone right/UTC failed leap second check, ignoring Jul 7 06:07:29.163769 chronyd[1647]: Loaded seccomp filter (level 2) Jul 7 06:07:29.168550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:07:29.181378 KVP[1649]: KVP LIC Version: 3.1 Jul 7 06:07:29.189047 kernel: hv_utils: KVP IC version 4.0 Jul 7 06:07:29.183784 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:07:29.194369 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 7 06:07:29.198779 extend-filesystems[1648]: Found loop4 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found loop5 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found loop6 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found loop7 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda1 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda2 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda3 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found usr Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda4 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda6 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda7 Jul 7 06:07:29.198779 extend-filesystems[1648]: Found sda9 Jul 7 06:07:29.198779 extend-filesystems[1648]: Checking size of /dev/sda9 Jul 7 06:07:29.359802 extend-filesystems[1648]: Old size kept for /dev/sda9 Jul 7 06:07:29.359802 extend-filesystems[1648]: Found sr0 Jul 7 06:07:29.262319 dbus-daemon[1642]: [system] SELinux support is enabled Jul 7 06:07:29.217601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:07:29.230814 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:07:29.253019 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:07:29.268602 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:07:29.269123 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:07:29.275529 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:07:29.383954 jq[1679]: true Jul 7 06:07:29.310404 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:07:29.319138 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:07:29.332940 systemd[1]: Started chronyd.service - NTP client/server. Jul 7 06:07:29.343790 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:07:29.344185 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:07:29.344527 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:07:29.345314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:07:29.370126 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:07:29.370358 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:07:29.385006 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:07:29.402026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:07:29.409828 update_engine[1673]: I20250707 06:07:29.401535 1673 main.cc:92] Flatcar Update Engine starting Jul 7 06:07:29.409828 update_engine[1673]: I20250707 06:07:29.405617 1673 update_check_scheduler.cc:74] Next update check in 4m30s Jul 7 06:07:29.402260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
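extend-filesystems walks every block device it can see (the "Found loop4 ... Found sr0" lines) before deciding /dev/sda9 keeps its old size. A rough equivalent of that enumeration, reading the kernel's partition table listing directly (assumes a Linux /proc):

    from pathlib import Path

    def list_block_devices():
        # /proc/partitions: two header lines, then "major minor #blocks name";
        # sizes are reported in 1 KiB blocks.
        devices = []
        for line in Path("/proc/partitions").read_text().splitlines()[2:]:
            major, minor, blocks, name = line.split()
            devices.append((name, int(blocks) * 1024))
        return devices

    for name, size in list_block_devices():
        print(f"Found {name} ({size} bytes)")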
Jul 7 06:07:29.423316 coreos-metadata[1641]: Jul 07 06:07:29.419 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:07:29.435311 coreos-metadata[1641]: Jul 07 06:07:29.431 INFO Fetch successful Jul 7 06:07:29.435311 coreos-metadata[1641]: Jul 07 06:07:29.431 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 7 06:07:29.435440 jq[1687]: true Jul 7 06:07:29.437159 coreos-metadata[1641]: Jul 07 06:07:29.437 INFO Fetch successful Jul 7 06:07:29.437159 coreos-metadata[1641]: Jul 07 06:07:29.437 INFO Fetching http://168.63.129.16/machine/dc8fd549-2aeb-43ab-979a-8426793ca570/f426cc42%2D7d1c%2D4803%2Da506%2D7ee814154da2.%5Fci%2D4081.3.4%2Da%2Dd5356a388e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 7 06:07:29.442814 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:07:29.445067 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:07:29.463884 systemd-logind[1663]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:07:29.466523 (ntainerd)[1690]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:07:29.469065 systemd-logind[1663]: New seat seat0. Jul 7 06:07:29.473472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:07:29.473504 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:07:29.482282 coreos-metadata[1641]: Jul 07 06:07:29.481 INFO Fetch successful Jul 7 06:07:29.482282 coreos-metadata[1641]: Jul 07 06:07:29.481 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:07:29.483772 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:07:29.493846 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:07:29.499614 coreos-metadata[1641]: Jul 07 06:07:29.499 INFO Fetch successful Jul 7 06:07:29.510246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1705) Jul 7 06:07:29.537708 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:07:29.568332 tar[1686]: linux-arm64/LICENSE Jul 7 06:07:29.568332 tar[1686]: linux-arm64/helm Jul 7 06:07:29.607655 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 06:07:29.619022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:07:29.633626 bash[1737]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:07:29.639614 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:07:29.654032 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
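coreos-metadata is talking to two Azure endpoints above: the WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. The IMDS call can be reproduced from inside the VM with the stdlib alone; the URL below is the exact vmSize query the agent logs, and the Metadata header is mandatory:

    import urllib.request

    # Same request coreos-metadata logs above; IMDS rejects calls that
    # lack the "Metadata: true" header.
    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())  # e.g. "Standard_D2ps_v5" (illustrative)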
Jul 7 06:07:29.796507 locksmithd[1722]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:07:30.095817 containerd[1690]: time="2025-07-07T06:07:30.095665860Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:07:30.112694 tar[1686]: linux-arm64/README.md Jul 7 06:07:30.131437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:07:30.153697 containerd[1690]: time="2025-07-07T06:07:30.153635860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.155392 containerd[1690]: time="2025-07-07T06:07:30.155340020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:30.155392 containerd[1690]: time="2025-07-07T06:07:30.155384940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:07:30.155500 containerd[1690]: time="2025-07-07T06:07:30.155402780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:07:30.155605 containerd[1690]: time="2025-07-07T06:07:30.155577780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 06:07:30.155636 containerd[1690]: time="2025-07-07T06:07:30.155604820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.155696 containerd[1690]: time="2025-07-07T06:07:30.155674860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:30.155696 containerd[1690]: time="2025-07-07T06:07:30.155694860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156045 containerd[1690]: time="2025-07-07T06:07:30.156009340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156076 containerd[1690]: time="2025-07-07T06:07:30.156042700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156076 containerd[1690]: time="2025-07-07T06:07:30.156060140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156076 containerd[1690]: time="2025-07-07T06:07:30.156070140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156185 containerd[1690]: time="2025-07-07T06:07:30.156163740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156419 containerd[1690]: time="2025-07-07T06:07:30.156395580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156543 containerd[1690]: time="2025-07-07T06:07:30.156519780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:07:30.156569 containerd[1690]: time="2025-07-07T06:07:30.156542620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:07:30.156650 containerd[1690]: time="2025-07-07T06:07:30.156629020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:07:30.156700 containerd[1690]: time="2025-07-07T06:07:30.156680940Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:07:30.180559 containerd[1690]: time="2025-07-07T06:07:30.180503100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:07:30.180698 containerd[1690]: time="2025-07-07T06:07:30.180582860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:07:30.180698 containerd[1690]: time="2025-07-07T06:07:30.180603980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:07:30.180698 containerd[1690]: time="2025-07-07T06:07:30.180632100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 06:07:30.180698 containerd[1690]: time="2025-07-07T06:07:30.180647500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 06:07:30.180888 containerd[1690]: time="2025-07-07T06:07:30.180860020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181162060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181350460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181369020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181381980Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181395420Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181408740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181428540Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181443100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181457540Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181470220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181481700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181493060Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181516260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.182452 containerd[1690]: time="2025-07-07T06:07:30.181530580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181542580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181554860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181566060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181580700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181592380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181608740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181622140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181638380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181650620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181662580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181674580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181688980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181711620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181723180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183110 containerd[1690]: time="2025-07-07T06:07:30.181736740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181801300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181820300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181831140Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181849980Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181861820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181873300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181884900Z" level=info msg="NRI interface is disabled by configuration." Jul 7 06:07:30.183887 containerd[1690]: time="2025-07-07T06:07:30.181896900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.182186340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.182278260Z" level=info msg="Connect containerd service" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.182316780Z" level=info msg="using legacy CRI server" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.182326740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.182446220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183070740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:07:30.184036 
containerd[1690]: time="2025-07-07T06:07:30.183239020Z" level=info msg="Start subscribing containerd event" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183314860Z" level=info msg="Start recovering state" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183390260Z" level=info msg="Start event monitor" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183400820Z" level=info msg="Start snapshots syncer" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183409940Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:07:30.184036 containerd[1690]: time="2025-07-07T06:07:30.183418260Z" level=info msg="Start streaming server" Jul 7 06:07:30.195426 containerd[1690]: time="2025-07-07T06:07:30.187120460Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:07:30.195426 containerd[1690]: time="2025-07-07T06:07:30.187189340Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:07:30.195426 containerd[1690]: time="2025-07-07T06:07:30.187305940Z" level=info msg="containerd successfully booted in 0.094955s" Jul 7 06:07:30.188214 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:07:30.264775 sshd_keygen[1675]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:07:30.286432 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:07:30.300092 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:07:30.310064 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 7 06:07:30.318891 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:07:30.319438 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:07:30.340838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:07:30.354091 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 06:07:30.375623 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:07:30.386768 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:07:30.399804 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:07:30.408108 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:07:30.566047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:30.573861 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:07:30.574566 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:07:30.584331 systemd[1]: Startup finished in 677ms (kernel) + 13.756s (initrd) + 11.808s (userspace) = 26.243s. Jul 7 06:07:30.808450 login[1797]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:30.809930 login[1798]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:30.818652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:07:30.825623 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:07:30.829903 systemd-logind[1663]: New session 2 of user core. Jul 7 06:07:30.838450 systemd-logind[1663]: New session 1 of user core. Jul 7 06:07:30.848890 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:07:30.859547 systemd[1]: Starting user@500.service - User Manager for UID 500... 
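containerd reported serving on /run/containerd/containerd.sock (gRPC plus a ttrpc sibling) just before this point. A real client speaks gRPC over that socket; as a bare liveness probe, simply opening it already distinguishes "booted" from "absent". Sketch, assuming permission on the socket:

    import socket

    # Not a gRPC handshake, just proof the socket from the log accepts
    # connections; a full client would use the containerd gRPC API.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect("/run/containerd/containerd.sock")
        print("containerd socket is accepting connections")
    finally:
        s.close()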
Jul 7 06:07:30.862551 (systemd)[1815]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:07:30.997653 systemd[1815]: Queued start job for default target default.target. Jul 7 06:07:31.007667 systemd[1815]: Created slice app.slice - User Application Slice. Jul 7 06:07:31.007693 systemd[1815]: Reached target paths.target - Paths. Jul 7 06:07:31.007705 systemd[1815]: Reached target timers.target - Timers. Jul 7 06:07:31.010414 systemd[1815]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:07:31.019898 systemd[1815]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:07:31.019962 systemd[1815]: Reached target sockets.target - Sockets. Jul 7 06:07:31.019974 systemd[1815]: Reached target basic.target - Basic System. Jul 7 06:07:31.020015 systemd[1815]: Reached target default.target - Main User Target. Jul 7 06:07:31.020042 systemd[1815]: Startup finished in 149ms. Jul 7 06:07:31.020505 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:07:31.026008 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:07:31.027062 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:07:31.075346 kubelet[1804]: E0707 06:07:31.075198 1804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:07:31.078376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:07:31.078516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:07:31.813251 waagent[1794]: 2025-07-07T06:07:31.812469Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 7 06:07:31.818430 waagent[1794]: 2025-07-07T06:07:31.818360Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 7 06:07:31.823972 waagent[1794]: 2025-07-07T06:07:31.823907Z INFO Daemon Daemon Python: 3.11.9 Jul 7 06:07:31.828848 waagent[1794]: 2025-07-07T06:07:31.828620Z INFO Daemon Daemon Run daemon Jul 7 06:07:31.832762 waagent[1794]: 2025-07-07T06:07:31.832714Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 7 06:07:31.841747 waagent[1794]: 2025-07-07T06:07:31.841682Z INFO Daemon Daemon Using waagent for provisioning Jul 7 06:07:31.846934 waagent[1794]: 2025-07-07T06:07:31.846885Z INFO Daemon Daemon Activate resource disk Jul 7 06:07:31.851826 waagent[1794]: 2025-07-07T06:07:31.851771Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 06:07:31.864497 waagent[1794]: 2025-07-07T06:07:31.864436Z INFO Daemon Daemon Found device: None Jul 7 06:07:31.869144 waagent[1794]: 2025-07-07T06:07:31.869096Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 06:07:31.879089 waagent[1794]: 2025-07-07T06:07:31.879032Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 06:07:31.892449 waagent[1794]: 2025-07-07T06:07:31.892393Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:07:31.898092 waagent[1794]: 2025-07-07T06:07:31.898036Z INFO Daemon Daemon Running default provisioning handler Jul 7 06:07:31.910197 waagent[1794]: 2025-07-07T06:07:31.910108Z INFO Daemon Daemon 
Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 7 06:07:31.925570 waagent[1794]: 2025-07-07T06:07:31.925500Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 06:07:31.935146 waagent[1794]: 2025-07-07T06:07:31.935084Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 06:07:31.939986 waagent[1794]: 2025-07-07T06:07:31.939932Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 06:07:32.036124 waagent[1794]: 2025-07-07T06:07:32.036022Z INFO Daemon Daemon Successfully mounted dvd Jul 7 06:07:32.064679 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 06:07:32.067038 waagent[1794]: 2025-07-07T06:07:32.066965Z INFO Daemon Daemon Detect protocol endpoint Jul 7 06:07:32.072080 waagent[1794]: 2025-07-07T06:07:32.072027Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 06:07:32.079548 waagent[1794]: 2025-07-07T06:07:32.079486Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 7 06:07:32.086580 waagent[1794]: 2025-07-07T06:07:32.086521Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 06:07:32.091953 waagent[1794]: 2025-07-07T06:07:32.091898Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 06:07:32.096769 waagent[1794]: 2025-07-07T06:07:32.096713Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 06:07:32.132324 waagent[1794]: 2025-07-07T06:07:32.132273Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 06:07:32.139212 waagent[1794]: 2025-07-07T06:07:32.139181Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 06:07:32.144418 waagent[1794]: 2025-07-07T06:07:32.144367Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 06:07:32.456276 waagent[1794]: 2025-07-07T06:07:32.456108Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 06:07:32.462558 waagent[1794]: 2025-07-07T06:07:32.462493Z INFO Daemon Daemon Forcing an update of the goal state. Jul 7 06:07:32.472417 waagent[1794]: 2025-07-07T06:07:32.472359Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:07:32.492513 waagent[1794]: 2025-07-07T06:07:32.492465Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 06:07:32.498481 waagent[1794]: 2025-07-07T06:07:32.498437Z INFO Daemon Jul 7 06:07:32.501419 waagent[1794]: 2025-07-07T06:07:32.501375Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 508da5ae-c6bc-4aa4-b174-f72f8672495a eTag: 4658257820558512225 source: Fabric] Jul 7 06:07:32.512301 waagent[1794]: 2025-07-07T06:07:32.512255Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
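waagent's protocol detection above ("Test for route to 168.63.129.16", wire protocol version negotiation) is plain HTTP against the WireServer. A sketch of the versions probe, assuming it runs inside an Azure VNet where 168.63.129.16 is routable; the header values are illustrative stand-ins, not the agent's exact ones:

    import urllib.request

    # Mirrors the agent's fetch of http://168.63.129.16/?comp=versions,
    # which returns XML listing the wire protocol versions the fabric
    # supports (2015-04-05 preferred, per the log).
    req = urllib.request.Request(
        "http://168.63.129.16/?comp=versions",
        headers={"x-ms-agent-name": "sketch", "x-ms-version": "2015-04-05"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())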
Jul 7 06:07:32.520045 waagent[1794]: 2025-07-07T06:07:32.519994Z INFO Daemon Jul 7 06:07:32.522869 waagent[1794]: 2025-07-07T06:07:32.522822Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:07:32.534363 waagent[1794]: 2025-07-07T06:07:32.534324Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 06:07:32.688254 waagent[1794]: 2025-07-07T06:07:32.687433Z INFO Daemon Downloaded certificate {'thumbprint': 'E553C7BE7FA317926866EC483A8097F36056737B', 'hasPrivateKey': False} Jul 7 06:07:32.697504 waagent[1794]: 2025-07-07T06:07:32.697454Z INFO Daemon Downloaded certificate {'thumbprint': '7C89F33E20F2DBE6023FEB23D98BBA7A3FCD9D0D', 'hasPrivateKey': True} Jul 7 06:07:32.707297 waagent[1794]: 2025-07-07T06:07:32.707196Z INFO Daemon Fetch goal state completed Jul 7 06:07:32.751770 waagent[1794]: 2025-07-07T06:07:32.751722Z INFO Daemon Daemon Starting provisioning Jul 7 06:07:32.757288 waagent[1794]: 2025-07-07T06:07:32.757205Z INFO Daemon Daemon Handle ovf-env.xml. Jul 7 06:07:32.762049 waagent[1794]: 2025-07-07T06:07:32.761994Z INFO Daemon Daemon Set hostname [ci-4081.3.4-a-d5356a388e] Jul 7 06:07:32.787246 waagent[1794]: 2025-07-07T06:07:32.782359Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-a-d5356a388e] Jul 7 06:07:32.789046 waagent[1794]: 2025-07-07T06:07:32.788977Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 06:07:32.795112 waagent[1794]: 2025-07-07T06:07:32.795056Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 06:07:32.839931 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:07:32.839939 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:07:32.839989 systemd-networkd[1330]: eth0: DHCP lease lost Jul 7 06:07:32.843251 waagent[1794]: 2025-07-07T06:07:32.841089Z INFO Daemon Daemon Create user account if not exists Jul 7 06:07:32.846760 waagent[1794]: 2025-07-07T06:07:32.846694Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 06:07:32.852699 waagent[1794]: 2025-07-07T06:07:32.852638Z INFO Daemon Daemon Configure sudoer Jul 7 06:07:32.853337 systemd-networkd[1330]: eth0: DHCPv6 lease lost Jul 7 06:07:32.857433 waagent[1794]: 2025-07-07T06:07:32.857364Z INFO Daemon Daemon Configure sshd Jul 7 06:07:32.861909 waagent[1794]: 2025-07-07T06:07:32.861849Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 06:07:32.876543 waagent[1794]: 2025-07-07T06:07:32.876477Z INFO Daemon Daemon Deploy ssh public key. Jul 7 06:07:32.893363 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 06:07:33.989608 waagent[1794]: 2025-07-07T06:07:33.989537Z INFO Daemon Daemon Provisioning complete Jul 7 06:07:34.008048 waagent[1794]: 2025-07-07T06:07:34.007998Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 06:07:34.014640 waagent[1794]: 2025-07-07T06:07:34.014574Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
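The "Examine /proc/net/route for primary interface" step above, and the route dump later in this log, both use the kernel's hex-encoded little-endian route table. A sketch of the same lookup, including decoding an entry like 0114C80A back to the 10.200.20.1 gateway that DHCP hands out above:

    import socket
    import struct
    from pathlib import Path

    def primary_interface() -> str:
        # The default route is the entry whose Destination is 00000000;
        # waagent picks its primary NIC the same way.
        for line in Path("/proc/net/route").read_text().splitlines()[1:]:
            fields = line.split()
            if fields[1] == "00000000":
                return fields[0]
        raise RuntimeError("no default route")

    def decode(hex_addr: str) -> str:
        # Entries are little-endian hex: 0114C80A -> 10.200.20.1
        return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

    print(primary_interface(), decode("0114C80A"))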
Jul 7 06:07:34.023992 waagent[1794]: 2025-07-07T06:07:34.023933Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 7 06:07:34.157354 waagent[1873]: 2025-07-07T06:07:34.156701Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 7 06:07:34.157354 waagent[1873]: 2025-07-07T06:07:34.156849Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 7 06:07:34.157354 waagent[1873]: 2025-07-07T06:07:34.156903Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 7 06:07:34.209272 waagent[1873]: 2025-07-07T06:07:34.209166Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 7 06:07:34.209597 waagent[1873]: 2025-07-07T06:07:34.209560Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:07:34.209737 waagent[1873]: 2025-07-07T06:07:34.209703Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:07:34.218359 waagent[1873]: 2025-07-07T06:07:34.218271Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 06:07:34.224844 waagent[1873]: 2025-07-07T06:07:34.224796Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 06:07:34.225556 waagent[1873]: 2025-07-07T06:07:34.225514Z INFO ExtHandler Jul 7 06:07:34.225717 waagent[1873]: 2025-07-07T06:07:34.225683Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4f8a1d86-8d05-4b0d-a8e2-db93425d4847 eTag: 4658257820558512225 source: Fabric] Jul 7 06:07:34.226111 waagent[1873]: 2025-07-07T06:07:34.226074Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 7 06:07:34.227258 waagent[1873]: 2025-07-07T06:07:34.226776Z INFO ExtHandler Jul 7 06:07:34.227258 waagent[1873]: 2025-07-07T06:07:34.226852Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 06:07:34.231093 waagent[1873]: 2025-07-07T06:07:34.231052Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 06:07:34.309193 waagent[1873]: 2025-07-07T06:07:34.309050Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E553C7BE7FA317926866EC483A8097F36056737B', 'hasPrivateKey': False} Jul 7 06:07:34.309588 waagent[1873]: 2025-07-07T06:07:34.309541Z INFO ExtHandler Downloaded certificate {'thumbprint': '7C89F33E20F2DBE6023FEB23D98BBA7A3FCD9D0D', 'hasPrivateKey': True} Jul 7 06:07:34.309995 waagent[1873]: 2025-07-07T06:07:34.309951Z INFO ExtHandler Fetch goal state completed Jul 7 06:07:34.324331 waagent[1873]: 2025-07-07T06:07:34.324277Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1873 Jul 7 06:07:34.324482 waagent[1873]: 2025-07-07T06:07:34.324448Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 06:07:34.326101 waagent[1873]: 2025-07-07T06:07:34.326057Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 06:07:34.326508 waagent[1873]: 2025-07-07T06:07:34.326467Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 06:07:34.343540 waagent[1873]: 2025-07-07T06:07:34.343494Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 06:07:34.343750 waagent[1873]: 2025-07-07T06:07:34.343708Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 06:07:34.350393 waagent[1873]: 2025-07-07T06:07:34.350343Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 06:07:34.357013 systemd[1]: Reloading requested from client PID 1888 ('systemctl') (unit waagent.service)... Jul 7 06:07:34.357282 systemd[1]: Reloading... Jul 7 06:07:34.434377 zram_generator::config[1925]: No configuration found. Jul 7 06:07:34.540271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:34.619944 systemd[1]: Reloading finished in 262 ms. Jul 7 06:07:34.644427 waagent[1873]: 2025-07-07T06:07:34.644339Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 7 06:07:34.650301 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit waagent.service)... Jul 7 06:07:34.650316 systemd[1]: Reloading... Jul 7 06:07:34.721279 zram_generator::config[2009]: No configuration found. Jul 7 06:07:34.839296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:07:34.914804 systemd[1]: Reloading finished in 264 ms. Jul 7 06:07:34.942246 waagent[1873]: 2025-07-07T06:07:34.939552Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 06:07:34.942246 waagent[1873]: 2025-07-07T06:07:34.939737Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 06:07:36.124124 waagent[1873]: 2025-07-07T06:07:36.124037Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 06:07:36.124728 waagent[1873]: 2025-07-07T06:07:36.124674Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 7 06:07:36.125628 waagent[1873]: 2025-07-07T06:07:36.125527Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 06:07:36.126124 waagent[1873]: 2025-07-07T06:07:36.125956Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 06:07:36.126421 waagent[1873]: 2025-07-07T06:07:36.126375Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:07:36.127287 waagent[1873]: 2025-07-07T06:07:36.126511Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 06:07:36.127287 waagent[1873]: 2025-07-07T06:07:36.126599Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:07:36.127287 waagent[1873]: 2025-07-07T06:07:36.126745Z INFO EnvHandler ExtHandler Configure routes Jul 7 06:07:36.127287 waagent[1873]: 2025-07-07T06:07:36.126804Z INFO EnvHandler ExtHandler Gateway:None Jul 7 06:07:36.127287 waagent[1873]: 2025-07-07T06:07:36.126845Z INFO EnvHandler ExtHandler Routes:None Jul 7 06:07:36.127924 waagent[1873]: 2025-07-07T06:07:36.127577Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 06:07:36.127924 waagent[1873]: 2025-07-07T06:07:36.127759Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
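The "Persistent firewall rules" work ends with waagent programming iptables entries that pin WireServer traffic (the full dump follows shortly below). Reading them back requires root; a sketch that filters the OUTPUT chain for the WireServer address:

    import subprocess

    # Lists the filter/OUTPUT chain and keeps only the 168.63.129.16
    # rules waagent installs (ACCEPT for DNS and root-owned traffic,
    # DROP for everything else).
    rules = subprocess.run(
        ["iptables", "-t", "filter", "-L", "OUTPUT", "-n", "-v"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in rules.splitlines():
        if "168.63.129.16" in line:
            print(line)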
Jul 7 06:07:36.128209 waagent[1873]: 2025-07-07T06:07:36.128181Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 7 06:07:36.128350 waagent[1873]: 2025-07-07T06:07:36.128126Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 06:07:36.129055 waagent[1873]: 2025-07-07T06:07:36.129011Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 06:07:36.130248 waagent[1873]: 2025-07-07T06:07:36.129197Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 06:07:36.130690 waagent[1873]: 2025-07-07T06:07:36.130638Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 06:07:36.131142 waagent[1873]: 2025-07-07T06:07:36.131098Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 06:07:36.131142 waagent[1873]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 06:07:36.131142 waagent[1873]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 06:07:36.131142 waagent[1873]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 06:07:36.131142 waagent[1873]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:07:36.131142 waagent[1873]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:07:36.131142 waagent[1873]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 06:07:36.137977 waagent[1873]: 2025-07-07T06:07:36.137920Z INFO ExtHandler ExtHandler Jul 7 06:07:36.138216 waagent[1873]: 2025-07-07T06:07:36.138176Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9f290aa2-d155-42cf-8464-03f5613ba1f4 correlation 29a87628-b335-48fa-805a-593b62e31cb6 created: 2025-07-07T06:06:16.186432Z] Jul 7 06:07:36.138718 waagent[1873]: 2025-07-07T06:07:36.138678Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 7 06:07:36.139404 waagent[1873]: 2025-07-07T06:07:36.139367Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 7 06:07:36.174033 waagent[1873]: 2025-07-07T06:07:36.173977Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D85B839A-AD2E-410F-87DF-DF515BEF34C5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 7 06:07:36.255412 waagent[1873]: 2025-07-07T06:07:36.255327Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jul 7 06:07:36.255412 waagent[1873]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.255412 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.255412 waagent[1873]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.255412 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.255412 waagent[1873]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.255412 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.255412 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 06:07:36.255412 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 06:07:36.255412 waagent[1873]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 06:07:36.258441 waagent[1873]: 2025-07-07T06:07:36.258373Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 7 06:07:36.258441 waagent[1873]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.258441 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.258441 waagent[1873]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.258441 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.258441 waagent[1873]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:07:36.258441 waagent[1873]: pkts bytes target prot opt in out source destination Jul 7 06:07:36.258441 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 06:07:36.258441 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 06:07:36.258441 waagent[1873]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 06:07:36.258693 waagent[1873]: 2025-07-07T06:07:36.258655Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 7 06:07:36.267261 waagent[1873]: 2025-07-07T06:07:36.266806Z INFO MonitorHandler ExtHandler Network interfaces: Jul 7 06:07:36.267261 waagent[1873]: Executing ['ip', '-a', '-o', 'link']: Jul 7 06:07:36.267261 waagent[1873]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 7 06:07:36.267261 waagent[1873]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:5c:5e brd ff:ff:ff:ff:ff:ff Jul 7 06:07:36.267261 waagent[1873]: 3: enP12779s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:5c:5e brd ff:ff:ff:ff:ff:ff\ altname enP12779p0s2 Jul 7 06:07:36.267261 waagent[1873]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 7 06:07:36.267261 waagent[1873]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 7 06:07:36.267261 waagent[1873]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 7 06:07:36.267261 waagent[1873]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 7 06:07:36.267261 waagent[1873]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 7 06:07:36.267261 waagent[1873]: 2: eth0 inet6 fe80::222:48ff:fe79:5c5e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 06:07:36.267261 waagent[1873]: 3: enP12779s1 inet6 fe80::222:48ff:fe79:5c5e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 06:07:41.329163 systemd[1]:
kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:07:41.336408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:41.439835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:41.444446 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:07:41.543363 kubelet[2103]: E0707 06:07:41.543320 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:07:41.546615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:07:41.546767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:07:51.797258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:07:51.805488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:07:52.094664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:07:52.099538 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:07:52.140336 kubelet[2117]: E0707 06:07:52.140288 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:07:52.142459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:07:52.142602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:07:52.952193 chronyd[1647]: Selected source PHC0 Jul 7 06:08:02.301952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 06:08:02.310437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:02.646274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:02.656499 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:02.693769 kubelet[2133]: E0707 06:08:02.693699 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:02.696428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:02.696710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:03.039350 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:08:03.040773 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:34276.service - OpenSSH per-connection server daemon (10.200.16.10:34276). 
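The kubelet crash loop above has one cause: /var/lib/kubelet/config.yaml does not exist yet (nothing has initialized or joined the node at this point), so the process exits and systemd reschedules it, bumping the restart counter each time. A sketch of a preflight check plus reading that counter back (NRestarts is a standard systemd unit property):

    from pathlib import Path
    import subprocess

    # The exact file kubelet's error message names; it appears once the
    # node is initialized/joined.
    cfg = Path("/var/lib/kubelet/config.yaml")
    print("config present" if cfg.exists() else f"missing: {cfg}")
    print(subprocess.run(
        ["systemctl", "show", "kubelet.service", "--property=NRestarts,Result"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())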
Jul 7 06:08:03.584640 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 34276 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:03.585985 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:03.590044 systemd-logind[1663]: New session 3 of user core. Jul 7 06:08:03.601392 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:08:04.019768 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:34292.service - OpenSSH per-connection server daemon (10.200.16.10:34292). Jul 7 06:08:04.487368 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 34292 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:04.488701 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:04.492464 systemd-logind[1663]: New session 4 of user core. Jul 7 06:08:04.495420 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:08:04.940383 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:34304.service - OpenSSH per-connection server daemon (10.200.16.10:34304). Jul 7 06:08:05.218818 sshd[2146]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:05.222785 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:34292.service: Deactivated successfully. Jul 7 06:08:05.224650 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:08:05.225451 systemd-logind[1663]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:08:05.226515 systemd-logind[1663]: Removed session 4. Jul 7 06:08:05.436424 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 34304 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:05.439121 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:05.442891 systemd-logind[1663]: New session 5 of user core. Jul 7 06:08:05.452646 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:08:05.880328 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:34320.service - OpenSSH per-connection server daemon (10.200.16.10:34320). Jul 7 06:08:06.162786 sshd[2151]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:06.166115 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:34304.service: Deactivated successfully. Jul 7 06:08:06.167771 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:08:06.170714 systemd-logind[1663]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:08:06.171610 systemd-logind[1663]: Removed session 5. Jul 7 06:08:06.362735 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 34320 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:06.364060 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:06.368317 systemd-logind[1663]: New session 6 of user core. Jul 7 06:08:06.380407 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:08:06.703128 sshd[2158]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:06.707507 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:34320.service: Deactivated successfully. Jul 7 06:08:06.709303 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:08:06.711505 systemd-logind[1663]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:08:06.712606 systemd-logind[1663]: Removed session 6. 
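Each accepted login above logs the client key as "SHA256:oCefl81i...". OpenSSH derives that string as the unpadded base64 of the SHA-256 of the raw key blob, so it can be recomputed from the authorized_keys file updated earlier in this log. Sketch:

    import base64
    import hashlib
    from pathlib import Path

    def fingerprint(authorized_keys_line: str) -> str:
        # Field 2 is the base64 key blob; OpenSSH prints
        # "SHA256:" + base64(sha256(blob)) with '=' padding stripped.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        return "SHA256:" + base64.b64encode(
            hashlib.sha256(blob).digest()).decode().rstrip("=")

    for line in Path("/home/core/.ssh/authorized_keys").read_text().splitlines():
        if line.strip() and not line.startswith("#"):
            print(fingerprint(line))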
Jul 7 06:08:06.785135 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:34332.service - OpenSSH per-connection server daemon (10.200.16.10:34332). Jul 7 06:08:07.237614 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 34332 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:07.238940 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:07.242704 systemd-logind[1663]: New session 7 of user core. Jul 7 06:08:07.250409 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:08:07.670305 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:08:07.670582 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:07.699657 sudo[2170]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:07.789079 sshd[2167]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:07.793240 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:34332.service: Deactivated successfully. Jul 7 06:08:07.794969 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:08:07.795774 systemd-logind[1663]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:08:07.796862 systemd-logind[1663]: Removed session 7. Jul 7 06:08:07.873314 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:34346.service - OpenSSH per-connection server daemon (10.200.16.10:34346). Jul 7 06:08:08.341468 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 34346 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:08.342841 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:08.347619 systemd-logind[1663]: New session 8 of user core. Jul 7 06:08:08.353421 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:08:08.606389 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:08:08.607468 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:08.610777 sudo[2179]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:08.615367 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:08:08.615622 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:08.633541 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:08:08.634968 auditctl[2182]: No rules Jul 7 06:08:08.635311 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:08:08.635484 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:08:08.638398 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:08:08.672252 augenrules[2200]: No rules Jul 7 06:08:08.673822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:08:08.674835 sudo[2178]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:08.765597 sshd[2175]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:08.769399 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:34346.service: Deactivated successfully. Jul 7 06:08:08.771016 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:08:08.771716 systemd-logind[1663]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:08:08.772816 systemd-logind[1663]: Removed session 8. 
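The audit-rules restart above flushes the kernel rule list (auditctl reports "No rules") before augenrules reloads it, and it comes back empty. A one-liner sketch for inspecting the same state, assuming auditctl is installed and run as root:

    import subprocess

    # `auditctl -l` prints "No rules" when the kernel audit rule list is
    # empty, matching the auditctl/augenrules output above.
    print(subprocess.run(["auditctl", "-l"],
                         capture_output=True, text=True).stdout.strip())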
Jul 7 06:08:08.849590 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:34358.service - OpenSSH per-connection server daemon (10.200.16.10:34358). Jul 7 06:08:09.305662 sshd[2208]: Accepted publickey for core from 10.200.16.10 port 34358 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:08:09.306966 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:09.310639 systemd-logind[1663]: New session 9 of user core. Jul 7 06:08:09.318377 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:08:09.564665 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:08:09.564942 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:10.535626 (dockerd)[2227]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:08:10.536020 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:08:11.097641 dockerd[2227]: time="2025-07-07T06:08:11.097578758Z" level=info msg="Starting up" Jul 7 06:08:11.478666 dockerd[2227]: time="2025-07-07T06:08:11.478552384Z" level=info msg="Loading containers: start." Jul 7 06:08:11.663253 kernel: Initializing XFRM netlink socket Jul 7 06:08:11.808072 systemd-networkd[1330]: docker0: Link UP Jul 7 06:08:11.837584 dockerd[2227]: time="2025-07-07T06:08:11.837535484Z" level=info msg="Loading containers: done." Jul 7 06:08:11.867491 dockerd[2227]: time="2025-07-07T06:08:11.867436506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:08:11.867684 dockerd[2227]: time="2025-07-07T06:08:11.867551146Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:08:11.867684 dockerd[2227]: time="2025-07-07T06:08:11.867663786Z" level=info msg="Daemon has completed initialization" Jul 7 06:08:11.928757 dockerd[2227]: time="2025-07-07T06:08:11.928684472Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:08:11.929067 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:08:12.801878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 06:08:12.813434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:12.834805 containerd[1690]: time="2025-07-07T06:08:12.834762580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:08:12.969508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:08:12.970982 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:13.013258 kubelet[2372]: E0707 06:08:13.013196 2372 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:13.015988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:13.016151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:13.993456 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 7 06:08:14.122906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333519086.mount: Deactivated successfully. Jul 7 06:08:14.342400 update_engine[1673]: I20250707 06:08:14.342262 1673 update_attempter.cc:509] Updating boot flags... Jul 7 06:08:14.404280 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2398) Jul 7 06:08:15.607731 containerd[1690]: time="2025-07-07T06:08:15.607684377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:15.612457 containerd[1690]: time="2025-07-07T06:08:15.612395147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 7 06:08:15.619862 containerd[1690]: time="2025-07-07T06:08:15.619830402Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:15.626473 containerd[1690]: time="2025-07-07T06:08:15.626411296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:15.627756 containerd[1690]: time="2025-07-07T06:08:15.627569138Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.792764998s" Jul 7 06:08:15.627756 containerd[1690]: time="2025-07-07T06:08:15.627610258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 06:08:15.628893 containerd[1690]: time="2025-07-07T06:08:15.628865941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:08:17.263159 containerd[1690]: time="2025-07-07T06:08:17.263095622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:17.269773 containerd[1690]: time="2025-07-07T06:08:17.269722394Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228"
Jul 7 06:08:17.274691 containerd[1690]: time="2025-07-07T06:08:17.274638923Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:17.286373 containerd[1690]: time="2025-07-07T06:08:17.286318545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:17.287777 containerd[1690]: time="2025-07-07T06:08:17.287631188Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.658727327s" Jul 7 06:08:17.287777 containerd[1690]: time="2025-07-07T06:08:17.287671988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 06:08:17.288420 containerd[1690]: time="2025-07-07T06:08:17.288213109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:08:18.722283 containerd[1690]: time="2025-07-07T06:08:18.721762852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:18.725253 containerd[1690]: time="2025-07-07T06:08:18.725180139Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 7 06:08:18.732893 containerd[1690]: time="2025-07-07T06:08:18.732835433Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:18.740221 containerd[1690]: time="2025-07-07T06:08:18.740169007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:18.744240 containerd[1690]: time="2025-07-07T06:08:18.743374653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.455113544s" Jul 7 06:08:18.744240 containerd[1690]: time="2025-07-07T06:08:18.743424333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 06:08:18.745265 containerd[1690]: time="2025-07-07T06:08:18.745207256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:08:20.101866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275026178.mount: Deactivated successfully.
Jul 7 06:08:20.465069 containerd[1690]: time="2025-07-07T06:08:20.464890059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:20.469599 containerd[1690]: time="2025-07-07T06:08:20.469441748Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 7 06:08:20.474869 containerd[1690]: time="2025-07-07T06:08:20.474805838Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:20.481245 containerd[1690]: time="2025-07-07T06:08:20.479547407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:20.481593 containerd[1690]: time="2025-07-07T06:08:20.481545930Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.736281594s" Jul 7 06:08:20.481632 containerd[1690]: time="2025-07-07T06:08:20.481599170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 06:08:20.484007 containerd[1690]: time="2025-07-07T06:08:20.483963535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:08:21.266026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817478981.mount: Deactivated successfully. 
Jul 7 06:08:22.649267 containerd[1690]: time="2025-07-07T06:08:22.649000177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.654016 containerd[1690]: time="2025-07-07T06:08:22.652647544Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 7 06:08:22.661160 containerd[1690]: time="2025-07-07T06:08:22.661105320Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.667496 containerd[1690]: time="2025-07-07T06:08:22.667431612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:22.669020 containerd[1690]: time="2025-07-07T06:08:22.668641574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.184631399s" Jul 7 06:08:22.669020 containerd[1690]: time="2025-07-07T06:08:22.668688094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:08:22.669239 containerd[1690]: time="2025-07-07T06:08:22.669178135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:08:23.051815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 7 06:08:23.059506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:23.170641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:23.180587 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:23.585509 kubelet[2546]: E0707 06:08:23.585454 2546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:23.588114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:23.588433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:24.639419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047107098.mount: Deactivated successfully. 
Jul 7 06:08:24.688079 containerd[1690]: time="2025-07-07T06:08:24.687275461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:24.693916 containerd[1690]: time="2025-07-07T06:08:24.693873073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 7 06:08:24.704962 containerd[1690]: time="2025-07-07T06:08:24.704892454Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:24.715116 containerd[1690]: time="2025-07-07T06:08:24.715033353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:24.715811 containerd[1690]: time="2025-07-07T06:08:24.715669994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 2.046455379s" Jul 7 06:08:24.715811 containerd[1690]: time="2025-07-07T06:08:24.715707874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:08:24.716514 containerd[1690]: time="2025-07-07T06:08:24.716331275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:08:25.646331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85459872.mount: Deactivated successfully. Jul 7 06:08:28.344818 containerd[1690]: time="2025-07-07T06:08:28.344764004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.348619 containerd[1690]: time="2025-07-07T06:08:28.348580134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 7 06:08:28.353478 containerd[1690]: time="2025-07-07T06:08:28.353421226Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.359943 containerd[1690]: time="2025-07-07T06:08:28.359901442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.361275 containerd[1690]: time="2025-07-07T06:08:28.361117485Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.64475249s" Jul 7 06:08:28.361275 containerd[1690]: time="2025-07-07T06:08:28.361156765Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 06:08:33.059158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:08:33.069509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:33.104520 systemd[1]: Reloading requested from client PID 2638 ('systemctl') (unit session-9.scope)... Jul 7 06:08:33.104537 systemd[1]: Reloading... Jul 7 06:08:33.223398 zram_generator::config[2684]: No configuration found. Jul 7 06:08:33.314244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:33.392450 systemd[1]: Reloading finished in 287 ms. Jul 7 06:08:33.535913 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:08:33.536001 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:08:33.536698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:33.544554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:33.654612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:33.660540 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:33.744872 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:33.744872 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:08:33.744872 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
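[Editor's note] The three Flag deprecation warnings above mean those values should move out of the unit's command line and into the file named by --config. A small sketch of the correspondence; the config-file key names are taken from the upstream KubeletConfiguration (v1beta1) reference and should be treated as assumptions to verify against this kubelet version (v1.32.4). The socket path in the usage line is the conventional containerd default, not read from this host.

    # Deprecated kubelet flags and their KubeletConfiguration equivalents.
    # --pod-infra-container-image has no config key: per the warning above it
    # is removed in 1.35 and the sandbox image then comes from the CRI runtime.
    FLAG_TO_CONFIG_KEY = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
    }

    def flags_to_config(argv: list[str]) -> dict[str, str]:
        """Translate '--flag=value' arguments into config-file keys."""
        out: dict[str, str] = {}
        for arg in argv:
            flag, _, value = arg.partition("=")
            if flag in FLAG_TO_CONFIG_KEY and value:
                out[FLAG_TO_CONFIG_KEY[flag]] = value
        return out

    print(flags_to_config(["--container-runtime-endpoint=unix:///run/containerd/containerd.sock"]))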
Jul 7 06:08:33.745282 kubelet[2742]: I0707 06:08:33.744931 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:34.810663 kubelet[2742]: I0707 06:08:34.810617 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:34.810663 kubelet[2742]: I0707 06:08:34.810654 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:34.811052 kubelet[2742]: I0707 06:08:34.810922 2742 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:34.832400 kubelet[2742]: E0707 06:08:34.832353 2742 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:34.835201 kubelet[2742]: I0707 06:08:34.835063 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:34.840478 kubelet[2742]: E0707 06:08:34.840436 2742 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:08:34.840660 kubelet[2742]: I0707 06:08:34.840647 2742 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:08:34.844321 kubelet[2742]: I0707 06:08:34.844287 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:08:34.844741 kubelet[2742]: I0707 06:08:34.844708 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:34.845499 kubelet[2742]: I0707 06:08:34.844814 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-d5356a388e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:34.845499 kubelet[2742]: I0707 06:08:34.845121 2742 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:34.845499 kubelet[2742]: I0707 06:08:34.845132 2742 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:34.845499 kubelet[2742]: I0707 06:08:34.845316 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:34.848755 kubelet[2742]: I0707 06:08:34.848724 2742 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:34.848889 kubelet[2742]: I0707 06:08:34.848878 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:34.848959 kubelet[2742]: I0707 06:08:34.848950 2742 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:34.849020 kubelet[2742]: I0707 06:08:34.849010 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:34.850438 kubelet[2742]: W0707 06:08:34.850383 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-d5356a388e&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:34.850534 kubelet[2742]: E0707 06:08:34.850449 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-d5356a388e&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:08:34.851614 kubelet[2742]: W0707 06:08:34.851434 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:34.851614 kubelet[2742]: E0707 06:08:34.851487 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:34.853241 kubelet[2742]: I0707 06:08:34.851952 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:08:34.853241 kubelet[2742]: I0707 06:08:34.852446 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:08:34.853241 kubelet[2742]: W0707 06:08:34.852506 2742 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:08:34.853567 kubelet[2742]: I0707 06:08:34.853541 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:34.853609 kubelet[2742]: I0707 06:08:34.853582 2742 server.go:1287] "Started kubelet" Jul 7 06:08:34.858054 kubelet[2742]: I0707 06:08:34.857999 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:34.859533 kubelet[2742]: E0707 06:08:34.859393 2742 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-d5356a388e.184fe321e909a509 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-d5356a388e,UID:ci-4081.3.4-a-d5356a388e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-d5356a388e,},FirstTimestamp:2025-07-07 06:08:34.853561609 +0000 UTC m=+1.189810067,LastTimestamp:2025-07-07 06:08:34.853561609 +0000 UTC m=+1.189810067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-d5356a388e,}" Jul 7 06:08:34.862076 kubelet[2742]: I0707 06:08:34.862032 2742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:08:34.863294 kubelet[2742]: I0707 06:08:34.863058 2742 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:34.864216 kubelet[2742]: I0707 06:08:34.864158 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:34.864439 kubelet[2742]: I0707 06:08:34.864410 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:08:34.864706 kubelet[2742]: E0707 06:08:34.864672 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-d5356a388e\" not found" Jul 7 06:08:34.864768 kubelet[2742]: I0707 06:08:34.864686 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:34.865107 kubelet[2742]: I0707 06:08:34.865089 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:08:34.866159 kubelet[2742]: I0707 06:08:34.865794 2742 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:34.866159 kubelet[2742]: I0707 06:08:34.865892 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:34.868099 kubelet[2742]: E0707 06:08:34.868058 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-d5356a388e?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Jul 7 06:08:34.869067 kubelet[2742]: I0707 06:08:34.869044 2742 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:34.869404 kubelet[2742]: I0707 06:08:34.869374 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:34.869459 kubelet[2742]: I0707 06:08:34.869436 2742 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:34.875425 kubelet[2742]: E0707 06:08:34.875393 2742 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:08:34.879082 kubelet[2742]: I0707 06:08:34.879021 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:34.880290 kubelet[2742]: I0707 06:08:34.880257 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:34.880332 kubelet[2742]: I0707 06:08:34.880299 2742 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:34.880332 kubelet[2742]: I0707 06:08:34.880321 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:08:34.880332 kubelet[2742]: I0707 06:08:34.880327 2742 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:34.880400 kubelet[2742]: E0707 06:08:34.880371 2742 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:34.886261 kubelet[2742]: W0707 06:08:34.886116 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:34.886261 kubelet[2742]: E0707 06:08:34.886167 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:34.886261 kubelet[2742]: W0707 06:08:34.886261 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:34.886430 kubelet[2742]: E0707 06:08:34.886286 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:34.931752 kubelet[2742]: I0707 06:08:34.931440 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:34.931752 kubelet[2742]: I0707 06:08:34.931480 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:34.931752 kubelet[2742]: I0707 06:08:34.931503 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:34.936952 kubelet[2742]: I0707 06:08:34.936925 2742 policy_none.go:49] "None policy: Start" Jul 7 06:08:34.937079 kubelet[2742]: I0707 06:08:34.937068 2742 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:34.937396 kubelet[2742]: I0707 06:08:34.937126 2742 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:34.946861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:08:34.958333 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:08:34.961824 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:08:34.964778 kubelet[2742]: E0707 06:08:34.964747 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-d5356a388e\" not found" Jul 7 06:08:34.967252 kubelet[2742]: I0707 06:08:34.967214 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:34.967457 kubelet[2742]: I0707 06:08:34.967439 2742 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:34.967496 kubelet[2742]: I0707 06:08:34.967459 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:34.967756 kubelet[2742]: I0707 06:08:34.967735 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:34.969191 kubelet[2742]: E0707 06:08:34.969108 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:08:34.969191 kubelet[2742]: E0707 06:08:34.969155 2742 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-d5356a388e\" not found" Jul 7 06:08:34.991765 systemd[1]: Created slice kubepods-burstable-poda0410a7cf2a22e4e42db83f406666237.slice - libcontainer container kubepods-burstable-poda0410a7cf2a22e4e42db83f406666237.slice. Jul 7 06:08:35.009770 kubelet[2742]: E0707 06:08:35.009732 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.012986 systemd[1]: Created slice kubepods-burstable-poddcaafae02eb976dcc5498c67bd58accb.slice - libcontainer container kubepods-burstable-poddcaafae02eb976dcc5498c67bd58accb.slice. Jul 7 06:08:35.021795 kubelet[2742]: E0707 06:08:35.021596 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.024978 systemd[1]: Created slice kubepods-burstable-pod6f5360b7a1766e582ff90f6e8cd0a864.slice - libcontainer container kubepods-burstable-pod6f5360b7a1766e582ff90f6e8cd0a864.slice. 
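[Editor's note] The slice names above follow the kubelet's systemd cgroup driver layout ("CgroupDriver":"systemd" in the NodeConfig logged at 06:08:34.844814): a QoS sub-slice for burstable and besteffort pods, then one slice per pod UID. A Python sketch of the naming; the dash-to-underscore escaping is an assumed upstream kubelet detail that is invisible here, since these static-pod UIDs contain no dashes.

    # Reconstructs names like kubepods-burstable-pod<uid>.slice as created above.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        parts = ["kubepods"]
        if qos_class != "guaranteed":  # guaranteed pods sit directly under kubepods.slice
            parts.append(qos_class)
        parts.append("pod" + pod_uid.replace("-", "_"))  # assumed upstream escaping
        return "-".join(parts) + ".slice"

    assert pod_slice_name("burstable", "a0410a7cf2a22e4e42db83f406666237") == \
        "kubepods-burstable-poda0410a7cf2a22e4e42db83f406666237.slice"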
Jul 7 06:08:35.026976 kubelet[2742]: E0707 06:08:35.026778 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.069848 kubelet[2742]: E0707 06:08:35.069560 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-d5356a388e?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Jul 7 06:08:35.070304 kubelet[2742]: I0707 06:08:35.069994 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070304 kubelet[2742]: I0707 06:08:35.070032 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070304 kubelet[2742]: I0707 06:08:35.070049 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070304 kubelet[2742]: I0707 06:08:35.070063 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070304 kubelet[2742]: I0707 06:08:35.070080 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f5360b7a1766e582ff90f6e8cd0a864-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-d5356a388e\" (UID: \"6f5360b7a1766e582ff90f6e8cd0a864\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070531 kubelet[2742]: I0707 06:08:35.070097 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070531 kubelet[2742]: I0707 06:08:35.070112 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e"
Jul 7 06:08:35.070531 kubelet[2742]: I0707 06:08:35.070136 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070531 kubelet[2742]: I0707 06:08:35.070153 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.070531 kubelet[2742]: I0707 06:08:35.070511 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.071253 kubelet[2742]: E0707 06:08:35.070888 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.273456 kubelet[2742]: I0707 06:08:35.273373 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.273960 kubelet[2742]: E0707 06:08:35.273934 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.311822 containerd[1690]: time="2025-07-07T06:08:35.311780617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-d5356a388e,Uid:a0410a7cf2a22e4e42db83f406666237,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:35.323091 containerd[1690]: time="2025-07-07T06:08:35.322927199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-d5356a388e,Uid:dcaafae02eb976dcc5498c67bd58accb,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:35.328699 containerd[1690]: time="2025-07-07T06:08:35.328497730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-d5356a388e,Uid:6f5360b7a1766e582ff90f6e8cd0a864,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:35.470696 kubelet[2742]: E0707 06:08:35.470569 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-d5356a388e?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Jul 7 06:08:35.676151 kubelet[2742]: I0707 06:08:35.675680 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.676151 kubelet[2742]: E0707 06:08:35.676002 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:35.756065 kubelet[2742]: W0707 06:08:35.755965 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 7 06:08:35.756065 kubelet[2742]: E0707 06:08:35.756033 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:35.758641 kubelet[2742]: W0707 06:08:35.758559 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:35.758641 kubelet[2742]: E0707 06:08:35.758614 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:35.933444 kubelet[2742]: W0707 06:08:35.928089 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:35.933444 kubelet[2742]: E0707 06:08:35.928159 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:36.271676 kubelet[2742]: E0707 06:08:36.271552 2742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-d5356a388e?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Jul 7 06:08:36.345481 kubelet[2742]: W0707 06:08:36.345416 2742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-d5356a388e&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 7 06:08:36.345640 kubelet[2742]: E0707 06:08:36.345490 2742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-d5356a388e&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:36.453921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615968226.mount: Deactivated successfully.
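[Editor's note] The "Failed to ensure lease exists, will retry" errors above show the node-lease controller backing off while the API server at 10.200.20.11:6443 is still down: interval="200ms" at 06:08:34.868, then 400ms, 800ms, and 1.6s. That is a plain doubling backoff; the Python sketch below reproduces the observed sequence (the 7s ceiling is an assumption about the upstream controller, not something visible in this log).

    def lease_retry_intervals(base: float = 0.2, cap: float = 7.0):
        """Yield retry intervals in seconds: 0.2, 0.4, 0.8, 1.6, ... capped."""
        interval = base
        while True:
            yield min(interval, cap)
            interval *= 2

    gen = lease_retry_intervals()
    print([next(gen) for _ in range(4)])  # [0.2, 0.4, 0.8, 1.6], as logged above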
Jul 7 06:08:36.478104 kubelet[2742]: I0707 06:08:36.478070 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:36.478482 kubelet[2742]: E0707 06:08:36.478445 2742 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:36.492276 containerd[1690]: time="2025-07-07T06:08:36.491866585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:36.510394 containerd[1690]: time="2025-07-07T06:08:36.510333180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 06:08:36.516367 containerd[1690]: time="2025-07-07T06:08:36.516328312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:36.524028 containerd[1690]: time="2025-07-07T06:08:36.522864445Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:36.529064 containerd[1690]: time="2025-07-07T06:08:36.529003257Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:36.535591 containerd[1690]: time="2025-07-07T06:08:36.535475669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:08:36.542473 containerd[1690]: time="2025-07-07T06:08:36.542423923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:08:36.549882 containerd[1690]: time="2025-07-07T06:08:36.549829697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:36.550802 containerd[1690]: time="2025-07-07T06:08:36.550559378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.227545739s" Jul 7 06:08:36.552964 containerd[1690]: time="2025-07-07T06:08:36.552916423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.224344373s"
Jul 7 06:08:36.572312 containerd[1690]: time="2025-07-07T06:08:36.572261140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.260395003s" Jul 7 06:08:36.915312 kubelet[2742]: E0707 06:08:36.915268 2742 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:37.304890 containerd[1690]: time="2025-07-07T06:08:37.304776120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:37.305150 containerd[1690]: time="2025-07-07T06:08:37.305021121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:37.305341 containerd[1690]: time="2025-07-07T06:08:37.305213601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:37.305663 containerd[1690]: time="2025-07-07T06:08:37.305296641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:37.305663 containerd[1690]: time="2025-07-07T06:08:37.305416001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.305789 containerd[1690]: time="2025-07-07T06:08:37.305742522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.306141 containerd[1690]: time="2025-07-07T06:08:37.305976003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.306401 containerd[1690]: time="2025-07-07T06:08:37.306303203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.311655 containerd[1690]: time="2025-07-07T06:08:37.311396173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:37.311655 containerd[1690]: time="2025-07-07T06:08:37.311457813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:37.311655 containerd[1690]: time="2025-07-07T06:08:37.311477493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.311655 containerd[1690]: time="2025-07-07T06:08:37.311579173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:37.337447 systemd[1]: Started cri-containerd-6b8d7c40268b8f7d9996e05dd7177a355cf465bbc4b638abcf37dbcde5d5ff2b.scope - libcontainer container 6b8d7c40268b8f7d9996e05dd7177a355cf465bbc4b638abcf37dbcde5d5ff2b. Jul 7 06:08:37.339787 systemd[1]: Started cri-containerd-cf636f55441f0f3754c249cc68314d0e7d4305e9dcd85288f1f573b6b9e6f5c5.scope - libcontainer container cf636f55441f0f3754c249cc68314d0e7d4305e9dcd85288f1f573b6b9e6f5c5.
Jul 7 06:08:37.343880 systemd[1]: Started cri-containerd-0546d328f2d793dedf3e26c8bf31efb8e7675dc19a35b892e63471a67723de41.scope - libcontainer container 0546d328f2d793dedf3e26c8bf31efb8e7675dc19a35b892e63471a67723de41. Jul 7 06:08:37.396542 containerd[1690]: time="2025-07-07T06:08:37.396408658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-d5356a388e,Uid:dcaafae02eb976dcc5498c67bd58accb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0546d328f2d793dedf3e26c8bf31efb8e7675dc19a35b892e63471a67723de41\"" Jul 7 06:08:37.400572 containerd[1690]: time="2025-07-07T06:08:37.399500744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-d5356a388e,Uid:a0410a7cf2a22e4e42db83f406666237,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf636f55441f0f3754c249cc68314d0e7d4305e9dcd85288f1f573b6b9e6f5c5\"" Jul 7 06:08:37.403196 containerd[1690]: time="2025-07-07T06:08:37.403042871Z" level=info msg="CreateContainer within sandbox \"0546d328f2d793dedf3e26c8bf31efb8e7675dc19a35b892e63471a67723de41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:08:37.405972 containerd[1690]: time="2025-07-07T06:08:37.405933876Z" level=info msg="CreateContainer within sandbox \"cf636f55441f0f3754c249cc68314d0e7d4305e9dcd85288f1f573b6b9e6f5c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:08:37.409748 containerd[1690]: time="2025-07-07T06:08:37.409625723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-d5356a388e,Uid:6f5360b7a1766e582ff90f6e8cd0a864,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b8d7c40268b8f7d9996e05dd7177a355cf465bbc4b638abcf37dbcde5d5ff2b\"" Jul 7 06:08:37.412909 containerd[1690]: time="2025-07-07T06:08:37.412872370Z" level=info msg="CreateContainer within sandbox \"6b8d7c40268b8f7d9996e05dd7177a355cf465bbc4b638abcf37dbcde5d5ff2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:08:37.486834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654255178.mount: Deactivated successfully. 
Jul 7 06:08:37.535106 containerd[1690]: time="2025-07-07T06:08:37.534845486Z" level=info msg="CreateContainer within sandbox \"0546d328f2d793dedf3e26c8bf31efb8e7675dc19a35b892e63471a67723de41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6980e998d7a0e162b71af23bdced39146946979534b8cf0f48875bb9de0d1b6\"" Jul 7 06:08:37.537145 containerd[1690]: time="2025-07-07T06:08:37.537100730Z" level=info msg="StartContainer for \"d6980e998d7a0e162b71af23bdced39146946979534b8cf0f48875bb9de0d1b6\"" Jul 7 06:08:37.538453 containerd[1690]: time="2025-07-07T06:08:37.538414133Z" level=info msg="CreateContainer within sandbox \"cf636f55441f0f3754c249cc68314d0e7d4305e9dcd85288f1f573b6b9e6f5c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eead4045a680eec9c239450cdbba3c2b403a307a651995a2968dbc378d734861\"" Jul 7 06:08:37.539289 containerd[1690]: time="2025-07-07T06:08:37.539082734Z" level=info msg="StartContainer for \"eead4045a680eec9c239450cdbba3c2b403a307a651995a2968dbc378d734861\"" Jul 7 06:08:37.550991 containerd[1690]: time="2025-07-07T06:08:37.550760037Z" level=info msg="CreateContainer within sandbox \"6b8d7c40268b8f7d9996e05dd7177a355cf465bbc4b638abcf37dbcde5d5ff2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc5a4d7ef44991c55215146ba3668d3180d59d528f6a53ff77ae5a474f764bff\"" Jul 7 06:08:37.551901 containerd[1690]: time="2025-07-07T06:08:37.551872719Z" level=info msg="StartContainer for \"bc5a4d7ef44991c55215146ba3668d3180d59d528f6a53ff77ae5a474f764bff\"" Jul 7 06:08:37.570510 systemd[1]: Started cri-containerd-d6980e998d7a0e162b71af23bdced39146946979534b8cf0f48875bb9de0d1b6.scope - libcontainer container d6980e998d7a0e162b71af23bdced39146946979534b8cf0f48875bb9de0d1b6. Jul 7 06:08:37.580796 systemd[1]: Started cri-containerd-eead4045a680eec9c239450cdbba3c2b403a307a651995a2968dbc378d734861.scope - libcontainer container eead4045a680eec9c239450cdbba3c2b403a307a651995a2968dbc378d734861. Jul 7 06:08:37.598496 systemd[1]: Started cri-containerd-bc5a4d7ef44991c55215146ba3668d3180d59d528f6a53ff77ae5a474f764bff.scope - libcontainer container bc5a4d7ef44991c55215146ba3668d3180d59d528f6a53ff77ae5a474f764bff. 
Jul 7 06:08:37.652420 containerd[1690]: time="2025-07-07T06:08:37.651758513Z" level=info msg="StartContainer for \"d6980e998d7a0e162b71af23bdced39146946979534b8cf0f48875bb9de0d1b6\" returns successfully" Jul 7 06:08:37.665550 containerd[1690]: time="2025-07-07T06:08:37.665413579Z" level=info msg="StartContainer for \"eead4045a680eec9c239450cdbba3c2b403a307a651995a2968dbc378d734861\" returns successfully" Jul 7 06:08:37.665999 containerd[1690]: time="2025-07-07T06:08:37.665876180Z" level=info msg="StartContainer for \"bc5a4d7ef44991c55215146ba3668d3180d59d528f6a53ff77ae5a474f764bff\" returns successfully" Jul 7 06:08:37.901818 kubelet[2742]: E0707 06:08:37.901561 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:37.907800 kubelet[2742]: E0707 06:08:37.905978 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:37.908655 kubelet[2742]: E0707 06:08:37.908628 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:38.081869 kubelet[2742]: I0707 06:08:38.081088 2742 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:38.909062 kubelet[2742]: E0707 06:08:38.909031 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:38.910528 kubelet[2742]: E0707 06:08:38.909447 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:39.911983 kubelet[2742]: E0707 06:08:39.911799 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:39.961252 kubelet[2742]: E0707 06:08:39.961091 2742 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.685162 kubelet[2742]: E0707 06:08:40.685118 2742 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-d5356a388e\" not found" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.795963 kubelet[2742]: I0707 06:08:40.795916 2742 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.795963 kubelet[2742]: E0707 06:08:40.795964 2742 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-a-d5356a388e\": node \"ci-4081.3.4-a-d5356a388e\" not found" Jul 7 06:08:40.854479 kubelet[2742]: I0707 06:08:40.854157 2742 apiserver.go:52] "Watching apiserver" Jul 7 06:08:40.867024 kubelet[2742]: I0707 06:08:40.866983 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.870303 kubelet[2742]: I0707 06:08:40.870245 2742 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 
06:08:40.885171 kubelet[2742]: E0707 06:08:40.884421 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.885171 kubelet[2742]: I0707 06:08:40.884450 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.888052 kubelet[2742]: E0707 06:08:40.887811 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-a-d5356a388e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.888052 kubelet[2742]: I0707 06:08:40.887845 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:40.890694 kubelet[2742]: E0707 06:08:40.890660 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:41.414866 kubelet[2742]: I0707 06:08:41.414831 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:41.417092 kubelet[2742]: E0707 06:08:41.417029 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-a-d5356a388e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.165172 systemd[1]: Reloading requested from client PID 3016 ('systemctl') (unit session-9.scope)... Jul 7 06:08:43.165188 systemd[1]: Reloading... Jul 7 06:08:43.259273 zram_generator::config[3059]: No configuration found. Jul 7 06:08:43.370899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:43.462628 systemd[1]: Reloading finished in 296 ms. Jul 7 06:08:43.500408 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:43.515561 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:08:43.515941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:43.517329 systemd[1]: kubelet.service: Consumed 1.561s CPU time, 129.1M memory peak, 0B memory swap peak. Jul 7 06:08:43.525487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:43.637365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:43.647631 (kubelet)[3120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:43.687994 kubelet[3120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:43.687994 kubelet[3120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
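The "no PriorityClass with name system-node-critical was found" failures above are transient: the static pods reference a built-in class that the API server bootstraps by itself shortly after it starts serving, so the mirror pods get created on a later sync (by 06:08:44 the only remaining error is "already exists"). A sketch, using client-go with a hypothetical kubeconfig path, of what creating that class by hand would look like; 2000001000 is the well-known value of the built-in class:

```go
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		// The value the apiserver assigns when it bootstraps the
		// built-in class (SystemCriticalPriority + 1000).
		Value:       2000001000,
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	_, err = cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		fmt.Println("already bootstrapped by the apiserver")
	} else if err != nil {
		panic(err)
	}
}
```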
Jul 7 06:08:43.687994 kubelet[3120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:43.688402 kubelet[3120]: I0707 06:08:43.688109 3120 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:43.695619 kubelet[3120]: I0707 06:08:43.695551 3120 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:43.695619 kubelet[3120]: I0707 06:08:43.695584 3120 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:43.695897 kubelet[3120]: I0707 06:08:43.695872 3120 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:43.697437 kubelet[3120]: I0707 06:08:43.697411 3120 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:08:43.699902 kubelet[3120]: I0707 06:08:43.699866 3120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:43.704264 kubelet[3120]: E0707 06:08:43.703896 3120 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:08:43.704264 kubelet[3120]: I0707 06:08:43.703933 3120 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:08:43.707127 kubelet[3120]: I0707 06:08:43.707094 3120 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:08:43.707368 kubelet[3120]: I0707 06:08:43.707331 3120 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:43.707541 kubelet[3120]: I0707 06:08:43.707365 3120 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-d5356a388e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:43.707643 kubelet[3120]: I0707 06:08:43.707546 3120 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:43.707643 kubelet[3120]: I0707 06:08:43.707555 3120 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:43.707643 kubelet[3120]: I0707 06:08:43.707594 3120 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:43.707744 kubelet[3120]: I0707 06:08:43.707728 3120 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:43.708609 kubelet[3120]: I0707 06:08:43.707817 3120 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:43.708609 kubelet[3120]: I0707 06:08:43.707851 3120 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:43.708609 kubelet[3120]: I0707 06:08:43.707863 3120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:43.709814 kubelet[3120]: I0707 06:08:43.709779 3120 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:08:43.712687 kubelet[3120]: I0707 06:08:43.712632 3120 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:08:43.715444 kubelet[3120]: I0707 06:08:43.715331 3120 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:43.715444 kubelet[3120]: I0707 06:08:43.715381 3120 server.go:1287] "Started kubelet" Jul 7 06:08:43.726149 kubelet[3120]: I0707 06:08:43.726106 3120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:43.728404 kubelet[3120]: I0707 06:08:43.728343 3120 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 7 06:08:43.730497 kubelet[3120]: I0707 06:08:43.730469 3120 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:43.732691 kubelet[3120]: I0707 06:08:43.732620 3120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:43.734269 kubelet[3120]: I0707 06:08:43.733493 3120 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:43.734269 kubelet[3120]: I0707 06:08:43.733707 3120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:08:43.741428 kubelet[3120]: I0707 06:08:43.741382 3120 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:08:43.742152 kubelet[3120]: E0707 06:08:43.741685 3120 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-d5356a388e\" not found" Jul 7 06:08:43.742152 kubelet[3120]: I0707 06:08:43.741933 3120 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:43.742152 kubelet[3120]: I0707 06:08:43.742060 3120 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:43.745306 kubelet[3120]: I0707 06:08:43.745206 3120 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:43.759582 kubelet[3120]: I0707 06:08:43.759531 3120 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:43.764874 kubelet[3120]: I0707 06:08:43.764835 3120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:43.766656 kubelet[3120]: I0707 06:08:43.766625 3120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:43.766822 kubelet[3120]: I0707 06:08:43.766812 3120 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:43.766892 kubelet[3120]: I0707 06:08:43.766882 3120 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:08:43.766951 kubelet[3120]: I0707 06:08:43.766942 3120 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:43.767119 kubelet[3120]: E0707 06:08:43.767032 3120 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:43.767521 kubelet[3120]: E0707 06:08:43.767282 3120 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:08:43.772863 kubelet[3120]: I0707 06:08:43.772828 3120 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829289 3120 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829309 3120 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829335 3120 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829535 3120 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829547 3120 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829565 3120 policy_none.go:49] "None policy: Start" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829573 3120 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829583 3120 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:43.830342 kubelet[3120]: I0707 06:08:43.829678 3120 state_mem.go:75] "Updated machine memory state" Jul 7 06:08:43.834459 kubelet[3120]: I0707 06:08:43.834433 3120 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:43.835134 kubelet[3120]: I0707 06:08:43.835119 3120 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:43.835263 kubelet[3120]: I0707 06:08:43.835206 3120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:43.835940 kubelet[3120]: I0707 06:08:43.835630 3120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:43.837541 kubelet[3120]: E0707 06:08:43.837519 3120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:08:43.867860 kubelet[3120]: I0707 06:08:43.867827 3120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.868246 kubelet[3120]: I0707 06:08:43.867877 3120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.868884 kubelet[3120]: I0707 06:08:43.867931 3120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.877424 kubelet[3120]: W0707 06:08:43.877385 3120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:08:43.882792 kubelet[3120]: W0707 06:08:43.882673 3120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:08:43.882792 kubelet[3120]: W0707 06:08:43.882732 3120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:08:43.938938 kubelet[3120]: I0707 06:08:43.938905 3120 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.958554 kubelet[3120]: I0707 06:08:43.958515 3120 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:43.958831 kubelet[3120]: I0707 06:08:43.958607 3120 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042763 kubelet[3120]: I0707 06:08:44.042634 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042763 kubelet[3120]: I0707 06:08:44.042720 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042763 kubelet[3120]: I0707 06:08:44.042741 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042763 kubelet[3120]: I0707 06:08:44.042761 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042950 kubelet[3120]: I0707 06:08:44.042777 3120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042950 kubelet[3120]: I0707 06:08:44.042797 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042950 kubelet[3120]: I0707 06:08:44.042814 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0410a7cf2a22e4e42db83f406666237-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" (UID: \"a0410a7cf2a22e4e42db83f406666237\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042950 kubelet[3120]: I0707 06:08:44.042829 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dcaafae02eb976dcc5498c67bd58accb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-d5356a388e\" (UID: \"dcaafae02eb976dcc5498c67bd58accb\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.042950 kubelet[3120]: I0707 06:08:44.042844 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f5360b7a1766e582ff90f6e8cd0a864-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-d5356a388e\" (UID: \"6f5360b7a1766e582ff90f6e8cd0a864\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.708576 kubelet[3120]: I0707 06:08:44.708541 3120 apiserver.go:52] "Watching apiserver" Jul 7 06:08:44.742993 kubelet[3120]: I0707 06:08:44.742931 3120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:08:44.811442 kubelet[3120]: I0707 06:08:44.810851 3120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.823566 kubelet[3120]: W0707 06:08:44.823526 3120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 06:08:44.823720 kubelet[3120]: E0707 06:08:44.823591 3120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-d5356a388e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" Jul 7 06:08:44.869299 kubelet[3120]: I0707 06:08:44.869138 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-d5356a388e" podStartSLOduration=1.869116514 podStartE2EDuration="1.869116514s" podCreationTimestamp="2025-07-07 06:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:44.855592047 +0000 UTC m=+1.203423257" watchObservedRunningTime="2025-07-07 06:08:44.869116514 +0000 UTC m=+1.216947684" Jul 7 
06:08:44.883047 kubelet[3120]: I0707 06:08:44.882566 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-d5356a388e" podStartSLOduration=1.8825444999999998 podStartE2EDuration="1.8825445s" podCreationTimestamp="2025-07-07 06:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:44.870036395 +0000 UTC m=+1.217867565" watchObservedRunningTime="2025-07-07 06:08:44.8825445 +0000 UTC m=+1.230375670" Jul 7 06:08:44.896496 kubelet[3120]: I0707 06:08:44.896421 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-d5356a388e" podStartSLOduration=1.896398247 podStartE2EDuration="1.896398247s" podCreationTimestamp="2025-07-07 06:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:44.88288658 +0000 UTC m=+1.230717750" watchObservedRunningTime="2025-07-07 06:08:44.896398247 +0000 UTC m=+1.244229417" Jul 7 06:08:48.829510 kubelet[3120]: I0707 06:08:48.829381 3120 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:08:48.830164 containerd[1690]: time="2025-07-07T06:08:48.830076803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:08:48.830423 kubelet[3120]: I0707 06:08:48.830378 3120 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:08:49.605458 systemd[1]: Created slice kubepods-besteffort-pod97de0752_cb12_46cf_97c6_5511e6f61cd2.slice - libcontainer container kubepods-besteffort-pod97de0752_cb12_46cf_97c6_5511e6f61cd2.slice. 
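"Updating runtime config through cri with podcidr" is the kubelet handing the node's newly assigned pod CIDR (192.168.0.0/24) to containerd over the CRI UpdateRuntimeConfig call; containerd replies that it has no CNI config yet and will wait for another component (here, Calico) to drop one in. A minimal sketch of that call, assuming the default containerd socket path:

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Mirrors the kubelet's kuberuntime_manager record above.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
}
```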
Jul 7 06:08:49.679618 kubelet[3120]: I0707 06:08:49.679577 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97de0752-cb12-46cf-97c6-5511e6f61cd2-kube-proxy\") pod \"kube-proxy-7fl5n\" (UID: \"97de0752-cb12-46cf-97c6-5511e6f61cd2\") " pod="kube-system/kube-proxy-7fl5n" Jul 7 06:08:49.679618 kubelet[3120]: I0707 06:08:49.679621 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97de0752-cb12-46cf-97c6-5511e6f61cd2-xtables-lock\") pod \"kube-proxy-7fl5n\" (UID: \"97de0752-cb12-46cf-97c6-5511e6f61cd2\") " pod="kube-system/kube-proxy-7fl5n" Jul 7 06:08:49.679789 kubelet[3120]: I0707 06:08:49.679661 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97de0752-cb12-46cf-97c6-5511e6f61cd2-lib-modules\") pod \"kube-proxy-7fl5n\" (UID: \"97de0752-cb12-46cf-97c6-5511e6f61cd2\") " pod="kube-system/kube-proxy-7fl5n" Jul 7 06:08:49.679789 kubelet[3120]: I0707 06:08:49.679681 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbv9\" (UniqueName: \"kubernetes.io/projected/97de0752-cb12-46cf-97c6-5511e6f61cd2-kube-api-access-qfbv9\") pod \"kube-proxy-7fl5n\" (UID: \"97de0752-cb12-46cf-97c6-5511e6f61cd2\") " pod="kube-system/kube-proxy-7fl5n" Jul 7 06:08:49.787361 kubelet[3120]: E0707 06:08:49.787323 3120 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 06:08:49.787361 kubelet[3120]: E0707 06:08:49.787360 3120 projected.go:194] Error preparing data for projected volume kube-api-access-qfbv9 for pod kube-system/kube-proxy-7fl5n: configmap "kube-root-ca.crt" not found Jul 7 06:08:49.787550 kubelet[3120]: E0707 06:08:49.787430 3120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97de0752-cb12-46cf-97c6-5511e6f61cd2-kube-api-access-qfbv9 podName:97de0752-cb12-46cf-97c6-5511e6f61cd2 nodeName:}" failed. No retries permitted until 2025-07-07 06:08:50.287406145 +0000 UTC m=+6.635237315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qfbv9" (UniqueName: "kubernetes.io/projected/97de0752-cb12-46cf-97c6-5511e6f61cd2-kube-api-access-qfbv9") pod "kube-proxy-7fl5n" (UID: "97de0752-cb12-46cf-97c6-5511e6f61cd2") : configmap "kube-root-ca.crt" not found Jul 7 06:08:50.033201 systemd[1]: Created slice kubepods-besteffort-pod98f6319d_5d18_4434_b46e_636f0d1c1df6.slice - libcontainer container kubepods-besteffort-pod98f6319d_5d18_4434_b46e_636f0d1c1df6.slice. 
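The kube-api-access-qfbv9 mount failure is likewise a bootstrap race: the projected service-account volume bundles the cluster CA from the per-namespace kube-root-ca.crt ConfigMap, which the controller-manager's root-ca-cert-publisher has not created yet on this fresh cluster, so the kubelet schedules a retry after the 500ms backoff shown. A small client-go sketch (hypothetical kubeconfig path) that waits for the ConfigMap on the same cadence:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, the same durationBeforeRetry the kubelet logged.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "kube-root-ca.crt", metav1.GetOptions{})
			return err == nil, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-root-ca.crt published; projected volumes can now mount")
}
```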
Jul 7 06:08:50.081900 kubelet[3120]: I0707 06:08:50.081862 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98f6319d-5d18-4434-b46e-636f0d1c1df6-var-lib-calico\") pod \"tigera-operator-747864d56d-x5zlm\" (UID: \"98f6319d-5d18-4434-b46e-636f0d1c1df6\") " pod="tigera-operator/tigera-operator-747864d56d-x5zlm" Jul 7 06:08:50.081900 kubelet[3120]: I0707 06:08:50.081917 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcw6\" (UniqueName: \"kubernetes.io/projected/98f6319d-5d18-4434-b46e-636f0d1c1df6-kube-api-access-jwcw6\") pod \"tigera-operator-747864d56d-x5zlm\" (UID: \"98f6319d-5d18-4434-b46e-636f0d1c1df6\") " pod="tigera-operator/tigera-operator-747864d56d-x5zlm" Jul 7 06:08:50.336936 containerd[1690]: time="2025-07-07T06:08:50.336894444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-x5zlm,Uid:98f6319d-5d18-4434-b46e-636f0d1c1df6,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:08:50.398985 containerd[1690]: time="2025-07-07T06:08:50.398800847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:50.399126 containerd[1690]: time="2025-07-07T06:08:50.399025208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:50.399546 containerd[1690]: time="2025-07-07T06:08:50.399434009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:50.399752 containerd[1690]: time="2025-07-07T06:08:50.399677889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:50.424529 systemd[1]: Started cri-containerd-6a1ed4620d5496d99cf18a723040661a9abbc103fd79701a4ec35210dc2a501c.scope - libcontainer container 6a1ed4620d5496d99cf18a723040661a9abbc103fd79701a4ec35210dc2a501c. Jul 7 06:08:50.454484 containerd[1690]: time="2025-07-07T06:08:50.454418558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-x5zlm,Uid:98f6319d-5d18-4434-b46e-636f0d1c1df6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6a1ed4620d5496d99cf18a723040661a9abbc103fd79701a4ec35210dc2a501c\"" Jul 7 06:08:50.457368 containerd[1690]: time="2025-07-07T06:08:50.457310004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:08:50.516012 containerd[1690]: time="2025-07-07T06:08:50.515956961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fl5n,Uid:97de0752-cb12-46cf-97c6-5511e6f61cd2,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:50.570859 containerd[1690]: time="2025-07-07T06:08:50.570649231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:50.570859 containerd[1690]: time="2025-07-07T06:08:50.570741031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:50.570859 containerd[1690]: time="2025-07-07T06:08:50.570759151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:50.571198 containerd[1690]: time="2025-07-07T06:08:50.570885591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:50.589458 systemd[1]: Started cri-containerd-92cfce5c65c2fae782f71e112cdbec5f63798865af944ac2261006034392e8e7.scope - libcontainer container 92cfce5c65c2fae782f71e112cdbec5f63798865af944ac2261006034392e8e7. Jul 7 06:08:50.610879 containerd[1690]: time="2025-07-07T06:08:50.610806111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fl5n,Uid:97de0752-cb12-46cf-97c6-5511e6f61cd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"92cfce5c65c2fae782f71e112cdbec5f63798865af944ac2261006034392e8e7\"" Jul 7 06:08:50.615325 containerd[1690]: time="2025-07-07T06:08:50.615261960Z" level=info msg="CreateContainer within sandbox \"92cfce5c65c2fae782f71e112cdbec5f63798865af944ac2261006034392e8e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:08:50.677822 containerd[1690]: time="2025-07-07T06:08:50.677660725Z" level=info msg="CreateContainer within sandbox \"92cfce5c65c2fae782f71e112cdbec5f63798865af944ac2261006034392e8e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"666d7a57cfee1775f937e0db7e17868b2725f501b10763b1ac95f0d3924d769f\"" Jul 7 06:08:50.679420 containerd[1690]: time="2025-07-07T06:08:50.678400366Z" level=info msg="StartContainer for \"666d7a57cfee1775f937e0db7e17868b2725f501b10763b1ac95f0d3924d769f\"" Jul 7 06:08:50.702442 systemd[1]: Started cri-containerd-666d7a57cfee1775f937e0db7e17868b2725f501b10763b1ac95f0d3924d769f.scope - libcontainer container 666d7a57cfee1775f937e0db7e17868b2725f501b10763b1ac95f0d3924d769f. Jul 7 06:08:50.732349 containerd[1690]: time="2025-07-07T06:08:50.732297234Z" level=info msg="StartContainer for \"666d7a57cfee1775f937e0db7e17868b2725f501b10763b1ac95f0d3924d769f\" returns successfully" Jul 7 06:08:50.848319 kubelet[3120]: I0707 06:08:50.847032 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7fl5n" podStartSLOduration=1.847013503 podStartE2EDuration="1.847013503s" podCreationTimestamp="2025-07-07 06:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:50.846428302 +0000 UTC m=+7.194259472" watchObservedRunningTime="2025-07-07 06:08:50.847013503 +0000 UTC m=+7.194844633" Jul 7 06:08:56.000854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970379696.mount: Deactivated successfully. 
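Note that the kube-proxy startup record above carries firstStartedPulling and lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC": that is Go's zero time.Time, meaning no image pull was recorded because the image was already on the node, which is also why podStartSLOduration equals podStartE2EDuration exactly (1.847013503s). For illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var neverPulled time.Time // zero value: no pull was recorded
	fmt.Println(neverPulled)  // 0001-01-01 00:00:00 +0000 UTC
}
```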
Jul 7 06:08:56.599787 containerd[1690]: time="2025-07-07T06:08:56.599727042Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:56.607679 containerd[1690]: time="2025-07-07T06:08:56.607398337Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 06:08:56.615282 containerd[1690]: time="2025-07-07T06:08:56.615244473Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:56.626502 containerd[1690]: time="2025-07-07T06:08:56.626415815Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:56.627396 containerd[1690]: time="2025-07-07T06:08:56.627209697Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 6.169860173s" Jul 7 06:08:56.627396 containerd[1690]: time="2025-07-07T06:08:56.627279017Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 06:08:56.630321 containerd[1690]: time="2025-07-07T06:08:56.630269383Z" level=info msg="CreateContainer within sandbox \"6a1ed4620d5496d99cf18a723040661a9abbc103fd79701a4ec35210dc2a501c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:08:56.686339 containerd[1690]: time="2025-07-07T06:08:56.686207935Z" level=info msg="CreateContainer within sandbox \"6a1ed4620d5496d99cf18a723040661a9abbc103fd79701a4ec35210dc2a501c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"276425477323fad1c278dfc3ef7fd564d92b3cf89447ad7d853c647c98ab4884\"" Jul 7 06:08:56.686876 containerd[1690]: time="2025-07-07T06:08:56.686838896Z" level=info msg="StartContainer for \"276425477323fad1c278dfc3ef7fd564d92b3cf89447ad7d853c647c98ab4884\"" Jul 7 06:08:56.713447 systemd[1]: Started cri-containerd-276425477323fad1c278dfc3ef7fd564d92b3cf89447ad7d853c647c98ab4884.scope - libcontainer container 276425477323fad1c278dfc3ef7fd564d92b3cf89447ad7d853c647c98ab4884. Jul 7 06:08:56.743369 containerd[1690]: time="2025-07-07T06:08:56.743312449Z" level=info msg="StartContainer for \"276425477323fad1c278dfc3ef7fd564d92b3cf89447ad7d853c647c98ab4884\" returns successfully" Jul 7 06:09:02.894675 sudo[2211]: pam_unix(sudo:session): session closed for user root Jul 7 06:09:02.985894 sshd[2208]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:02.994093 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:34358.service: Deactivated successfully. Jul 7 06:09:02.994286 systemd-logind[1663]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:09:03.001507 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:09:03.001708 systemd[1]: session-9.scope: Consumed 6.109s CPU time, 153.4M memory peak, 0B memory swap peak. Jul 7 06:09:03.002511 systemd-logind[1663]: Removed session 9. 
Jul 7 06:09:11.347248 kubelet[3120]: I0707 06:09:11.346339 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-x5zlm" podStartSLOduration=16.174338421 podStartE2EDuration="22.346318557s" podCreationTimestamp="2025-07-07 06:08:49 +0000 UTC" firstStartedPulling="2025-07-07 06:08:50.456144642 +0000 UTC m=+6.803975812" lastFinishedPulling="2025-07-07 06:08:56.628124778 +0000 UTC m=+12.975955948" observedRunningTime="2025-07-07 06:08:56.858126358 +0000 UTC m=+13.205957568" watchObservedRunningTime="2025-07-07 06:09:11.346318557 +0000 UTC m=+27.694149727" Jul 7 06:09:11.355549 systemd[1]: Created slice kubepods-besteffort-podd959df08_edec_4069_8d85_e4f04c5e9936.slice - libcontainer container kubepods-besteffort-podd959df08_edec_4069_8d85_e4f04c5e9936.slice. Jul 7 06:09:11.415585 kubelet[3120]: I0707 06:09:11.415533 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfj6\" (UniqueName: \"kubernetes.io/projected/d959df08-edec-4069-8d85-e4f04c5e9936-kube-api-access-njfj6\") pod \"calico-typha-7d98659665-llhfm\" (UID: \"d959df08-edec-4069-8d85-e4f04c5e9936\") " pod="calico-system/calico-typha-7d98659665-llhfm" Jul 7 06:09:11.415585 kubelet[3120]: I0707 06:09:11.415588 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959df08-edec-4069-8d85-e4f04c5e9936-tigera-ca-bundle\") pod \"calico-typha-7d98659665-llhfm\" (UID: \"d959df08-edec-4069-8d85-e4f04c5e9936\") " pod="calico-system/calico-typha-7d98659665-llhfm" Jul 7 06:09:11.415771 kubelet[3120]: I0707 06:09:11.415606 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d959df08-edec-4069-8d85-e4f04c5e9936-typha-certs\") pod \"calico-typha-7d98659665-llhfm\" (UID: \"d959df08-edec-4069-8d85-e4f04c5e9936\") " pod="calico-system/calico-typha-7d98659665-llhfm" Jul 7 06:09:11.512306 systemd[1]: Created slice kubepods-besteffort-pod54acfa64_edb0_43a4_9149_11141724ce6a.slice - libcontainer container kubepods-besteffort-pod54acfa64_edb0_43a4_9149_11141724ce6a.slice. 
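The tigera-operator startup record above is internally consistent: subtracting the monotonic (m=+) offsets, lastFinishedPulling minus firstStartedPulling is 6.171980136s, which equals podStartE2EDuration minus podStartSLOduration exactly, so the latency tracker is excluding image-pull time from the SLO figure; it also sits about 2ms above containerd's own measured pull time of 6.169860173s, the difference being kubelet-side bookkeeping around the pull. A trivial check:

```go
package main

import "fmt"

func main() {
	// Monotonic (m=+) offsets and durations from the record above.
	first := 6.803975812 // firstStartedPulling
	last := 12.975955948 // lastFinishedPulling
	e2e := 22.346318557  // podStartE2EDuration
	slo := 16.174338421  // podStartSLOduration
	fmt.Printf("pull window: %.9fs\n", last-first) // 6.171980136s
	fmt.Printf("e2e - slo:   %.9fs\n", e2e-slo)    // 6.171980136s
}
```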
Jul 7 06:09:11.517776 kubelet[3120]: I0707 06:09:11.516660 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54acfa64-edb0-43a4-9149-11141724ce6a-node-certs\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.517776 kubelet[3120]: I0707 06:09:11.516698 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54acfa64-edb0-43a4-9149-11141724ce6a-tigera-ca-bundle\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.517776 kubelet[3120]: I0707 06:09:11.516715 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-var-run-calico\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.517776 kubelet[3120]: I0707 06:09:11.516731 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-var-lib-calico\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.517776 kubelet[3120]: I0707 06:09:11.516746 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-cni-log-dir\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.518015 kubelet[3120]: I0707 06:09:11.516781 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-xtables-lock\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.518015 kubelet[3120]: I0707 06:09:11.516798 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94bq\" (UniqueName: \"kubernetes.io/projected/54acfa64-edb0-43a4-9149-11141724ce6a-kube-api-access-c94bq\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.518015 kubelet[3120]: I0707 06:09:11.516816 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-cni-bin-dir\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.520815 kubelet[3120]: I0707 06:09:11.520599 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-cni-net-dir\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.520815 kubelet[3120]: I0707 06:09:11.520666 3120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-policysync\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.520815 kubelet[3120]: I0707 06:09:11.520689 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-flexvol-driver-host\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.520815 kubelet[3120]: I0707 06:09:11.520711 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54acfa64-edb0-43a4-9149-11141724ce6a-lib-modules\") pod \"calico-node-h2h6b\" (UID: \"54acfa64-edb0-43a4-9149-11141724ce6a\") " pod="calico-system/calico-node-h2h6b" Jul 7 06:09:11.624790 kubelet[3120]: E0707 06:09:11.623663 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.624790 kubelet[3120]: W0707 06:09:11.623697 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.624790 kubelet[3120]: E0707 06:09:11.623722 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.626966 kubelet[3120]: E0707 06:09:11.626936 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.627291 kubelet[3120]: W0707 06:09:11.627112 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.627291 kubelet[3120]: E0707 06:09:11.627144 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.630875 kubelet[3120]: E0707 06:09:11.630843 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.631120 kubelet[3120]: W0707 06:09:11.631030 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.632113 kubelet[3120]: E0707 06:09:11.632041 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:11.632503 kubelet[3120]: E0707 06:09:11.632343 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.632503 kubelet[3120]: W0707 06:09:11.632358 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.632654 kubelet[3120]: E0707 06:09:11.632633 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.633357 kubelet[3120]: E0707 06:09:11.633338 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.633476 kubelet[3120]: W0707 06:09:11.633463 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.633671 kubelet[3120]: E0707 06:09:11.633591 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.635826 kubelet[3120]: E0707 06:09:11.635799 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.636047 kubelet[3120]: W0707 06:09:11.635956 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.636446 kubelet[3120]: E0707 06:09:11.636203 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.637277 kubelet[3120]: E0707 06:09:11.636576 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.637529 kubelet[3120]: W0707 06:09:11.637395 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.637637 kubelet[3120]: E0707 06:09:11.637621 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:11.637754 kubelet[3120]: E0707 06:09:11.637745 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:11.637859 kubelet[3120]: W0707 06:09:11.637807 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:11.637859 kubelet[3120]: E0707 06:09:11.637842 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:09:11.638259 kubelet[3120]: E0707 06:09:11.638088 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:11.638259 kubelet[3120]: W0707 06:09:11.638099 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:11.638259 kubelet[3120]: E0707 06:09:11.638126 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the E/W/E triple above repeats with successive timestamps through 06:09:11.647388]
[further repeats of the FlexVolume E/W/E triple through 06:09:11.648900]
Jul 7 06:09:11.651675 kubelet[3120]: E0707 06:09:11.651437 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a"
[further FlexVolume probe failures through 06:09:11.655119]
Jul 7 06:09:11.660259 containerd[1690]: time="2025-07-07T06:09:11.659612991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d98659665-llhfm,Uid:d959df08-edec-4069-8d85-e4f04c5e9936,Namespace:calico-system,Attempt:0,}"
[further FlexVolume probe failures through 06:09:11.713467]
[the FlexVolume E/W/E triple keeps repeating through 06:09:11.734147]
Jul 7 06:09:11.734183 kubelet[3120]: I0707 06:09:11.734171 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c9907d5-346d-4929-b83e-924668157d8a-registration-dir\") pod \"csi-node-driver-wq7zv\" (UID: \"7c9907d5-346d-4929-b83e-924668157d8a\") " pod="calico-system/csi-node-driver-wq7zv"
Jul 7 06:09:11.734409 kubelet[3120]: I0707 06:09:11.734379 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c9907d5-346d-4929-b83e-924668157d8a-socket-dir\") pod \"csi-node-driver-wq7zv\" (UID: \"7c9907d5-346d-4929-b83e-924668157d8a\") " pod="calico-system/csi-node-driver-wq7zv"
Jul 7 06:09:11.734814 kubelet[3120]: I0707 06:09:11.734567 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq52\" (UniqueName: \"kubernetes.io/projected/7c9907d5-346d-4929-b83e-924668157d8a-kube-api-access-hvq52\") pod \"csi-node-driver-wq7zv\" (UID: \"7c9907d5-346d-4929-b83e-924668157d8a\") " pod="calico-system/csi-node-driver-wq7zv"
Jul 7 06:09:11.734814 kubelet[3120]: I0707 06:09:11.734733 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c9907d5-346d-4929-b83e-924668157d8a-kubelet-dir\") pod \"csi-node-driver-wq7zv\" (UID: \"7c9907d5-346d-4929-b83e-924668157d8a\") " pod="calico-system/csi-node-driver-wq7zv"
Jul 7 06:09:11.739701 kubelet[3120]: I0707 06:09:11.739286 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c9907d5-346d-4929-b83e-924668157d8a-varrun\") pod \"csi-node-driver-wq7zv\" (UID: \"7c9907d5-346d-4929-b83e-924668157d8a\") " pod="calico-system/csi-node-driver-wq7zv"
[FlexVolume E/W/E triples interleave with the reconciler messages above, continuing through 06:09:11.740399]
[further FlexVolume probe failures through 06:09:11.742784]
Jul 7 06:09:11.758989 containerd[1690]: time="2025-07-07T06:09:11.758809352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:11.758989 containerd[1690]: time="2025-07-07T06:09:11.758900752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:11.758989 containerd[1690]: time="2025-07-07T06:09:11.758912952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:11.759624 containerd[1690]: time="2025-07-07T06:09:11.759533394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:11.795441 systemd[1]: Started cri-containerd-dc28046b49102fa1e47cbe19646ca01a9f7e7cc7211b8df63d556858a2a33107.scope - libcontainer container dc28046b49102fa1e47cbe19646ca01a9f7e7cc7211b8df63d556858a2a33107.
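The E/W/E triple that dominates this stretch of the journal is the kubelet's FlexVolume prober: it finds a nodeagent~uds driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec (a directory commonly associated with Istio's node agent), execs its uds binary with the init argument, the binary is missing so the call produces empty output, and JSON-decoding an empty byte slice fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode, using only the paths shown in the log; it illustrates the mechanism and is not the kubelet's actual driver-call.go source:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is the minimal shape of the JSON a FlexVolume driver must print
// on stdout, e.g. {"status":"Success"}.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probeDriver imitates, in simplified form, what the kubelet does when it
// probes the nodeagent~uds driver with "init".
func probeDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	if execErr != nil {
		// With the binary absent, out stays empty, mirroring the W driver-call.go:149 line.
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			path, execErr, out)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal of empty input returns exactly "unexpected end of JSON input",
		// the E driver-call.go:262 error that floods this log.
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, err)
	}
	return nil
}

func main() {
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

The noise stops once the uds binary exists and answers init with a FlexVolume status document such as {"status":"Success","capabilities":{"attach":false}}, or once the stale nodeagent~uds directory is removed from the plugin directory.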
Jul 7 06:09:11.816922 containerd[1690]: time="2025-07-07T06:09:11.816870190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h2h6b,Uid:54acfa64-edb0-43a4-9149-11141724ce6a,Namespace:calico-system,Attempt:0,}"
[the FlexVolume E/W/E triple keeps repeating through 06:09:11.872851]
Jul 7 06:09:11.886501 containerd[1690]: time="2025-07-07T06:09:11.886345330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d98659665-llhfm,Uid:d959df08-edec-4069-8d85-e4f04c5e9936,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc28046b49102fa1e47cbe19646ca01a9f7e7cc7211b8df63d556858a2a33107\""
Jul 7 06:09:11.894677 containerd[1690]: time="2025-07-07T06:09:11.894535427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:09:11.898256 containerd[1690]: time="2025-07-07T06:09:11.897286673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:11.898256 containerd[1690]: time="2025-07-07T06:09:11.897352513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:11.898256 containerd[1690]: time="2025-07-07T06:09:11.897386753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:11.898256 containerd[1690]: time="2025-07-07T06:09:11.897481873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
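The RunPodSandbox and PullImage messages above, and the CreateContainer/StartContainer ones that follow, are containerd's CRI plugin logging the calls the kubelet makes while bringing up calico-typha-7d98659665-llhfm and calico-node-h2h6b. For reference, a hedged sketch of listing the resulting sandboxes over the same gRPC API; the socket path is the conventional containerd location and an assumption, as is everything else not taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; the kubelet talks to containerd's CRI plugin over it.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ListPodSandbox returns the sandboxes RunPodSandbox created, e.g. the
	// dc28046b… (calico-typha) and 3beb95f3… (calico-node) ids in this log.
	resp, err := runtimev1.NewRuntimeServiceClient(conn).ListPodSandbox(ctx, &runtimev1.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Println(sb.Id[:13], sb.Metadata.Namespace+"/"+sb.Metadata.Name, sb.State)
	}
}
```

crictl pods performs the same query from the command line.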
Jul 7 06:09:11.922468 systemd[1]: Started cri-containerd-3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215.scope - libcontainer container 3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215.
Jul 7 06:09:11.967864 containerd[1690]: time="2025-07-07T06:09:11.967739815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h2h6b,Uid:54acfa64-edb0-43a4-9149-11141724ce6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\""
Jul 7 06:09:12.767839 kubelet[3120]: E0707 06:09:12.767791 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a"
Jul 7 06:09:13.448430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126114460.mount: Deactivated successfully.
Jul 7 06:09:14.389694 containerd[1690]: time="2025-07-07T06:09:14.389631698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:14.394851 containerd[1690]: time="2025-07-07T06:09:14.394789748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 7 06:09:14.405457 containerd[1690]: time="2025-07-07T06:09:14.405381849Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:14.419765 containerd[1690]: time="2025-07-07T06:09:14.419678877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:14.421313 containerd[1690]: time="2025-07-07T06:09:14.420672559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.526092772s"
Jul 7 06:09:14.421313 containerd[1690]: time="2025-07-07T06:09:14.420724079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 7 06:09:14.421959 containerd[1690]: time="2025-07-07T06:09:14.421933242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 06:09:14.435193 containerd[1690]: time="2025-07-07T06:09:14.434788747Z" level=info msg="CreateContainer within sandbox \"dc28046b49102fa1e47cbe19646ca01a9f7e7cc7211b8df63d556858a2a33107\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 06:09:14.499971 containerd[1690]: time="2025-07-07T06:09:14.499914996Z" level=info msg="CreateContainer within sandbox \"dc28046b49102fa1e47cbe19646ca01a9f7e7cc7211b8df63d556858a2a33107\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1018b71e5e94cd59178276c21fb1101acfb933dd160cfc12da863beff0f80801\""
Jul 7 06:09:14.502200 containerd[1690]: time="2025-07-07T06:09:14.500823078Z" level=info msg="StartContainer for \"1018b71e5e94cd59178276c21fb1101acfb933dd160cfc12da863beff0f80801\""
Jul 7 06:09:14.533464 systemd[1]: Started cri-containerd-1018b71e5e94cd59178276c21fb1101acfb933dd160cfc12da863beff0f80801.scope - libcontainer container 1018b71e5e94cd59178276c21fb1101acfb933dd160cfc12da863beff0f80801.
Jul 7 06:09:14.575405 containerd[1690]: time="2025-07-07T06:09:14.575083145Z" level=info msg="StartContainer for \"1018b71e5e94cd59178276c21fb1101acfb933dd160cfc12da863beff0f80801\" returns successfully"
Jul 7 06:09:14.768417 kubelet[3120]: E0707 06:09:14.767731 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a"
Jul 7 06:09:14.911583 kubelet[3120]: I0707 06:09:14.910750 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d98659665-llhfm" podStartSLOduration=1.382008112 podStartE2EDuration="3.910730569s" podCreationTimestamp="2025-07-07 06:09:11 +0000 UTC" firstStartedPulling="2025-07-07 06:09:11.893093784 +0000 UTC m=+28.240924954" lastFinishedPulling="2025-07-07 06:09:14.421816281 +0000 UTC m=+30.769647411" observedRunningTime="2025-07-07 06:09:14.908819405 +0000 UTC m=+31.256650575" watchObservedRunningTime="2025-07-07 06:09:14.910730569 +0000 UTC m=+31.258561739"
Jul 7 06:09:14.957403 kubelet[3120]: E0707 06:09:14.957369 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.957403 kubelet[3120]: W0707 06:09:14.957437 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.957403 kubelet[3120]: E0707 06:09:14.957465 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume E/W/E triple resumes and repeats through 06:09:14.961614]
Error: unexpected end of JSON input" Jul 7 06:09:14.961839 kubelet[3120]: E0707 06:09:14.961814 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.961839 kubelet[3120]: W0707 06:09:14.961828 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.961839 kubelet[3120]: E0707 06:09:14.961836 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.962047 kubelet[3120]: E0707 06:09:14.962030 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.962047 kubelet[3120]: W0707 06:09:14.962042 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.962117 kubelet[3120]: E0707 06:09:14.962051 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.962331 kubelet[3120]: E0707 06:09:14.962204 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.962331 kubelet[3120]: W0707 06:09:14.962217 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.962331 kubelet[3120]: E0707 06:09:14.962252 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.962793 kubelet[3120]: E0707 06:09:14.962419 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.962793 kubelet[3120]: W0707 06:09:14.962428 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.962793 kubelet[3120]: E0707 06:09:14.962436 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.962793 kubelet[3120]: E0707 06:09:14.962594 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.962793 kubelet[3120]: W0707 06:09:14.962602 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.962793 kubelet[3120]: E0707 06:09:14.962610 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.963070 kubelet[3120]: E0707 06:09:14.962838 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.963070 kubelet[3120]: W0707 06:09:14.962848 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.963070 kubelet[3120]: E0707 06:09:14.962860 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.963210 kubelet[3120]: E0707 06:09:14.963190 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.963210 kubelet[3120]: W0707 06:09:14.963203 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.963292 kubelet[3120]: E0707 06:09:14.963212 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.963664 kubelet[3120]: E0707 06:09:14.963638 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.963664 kubelet[3120]: W0707 06:09:14.963659 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.963746 kubelet[3120]: E0707 06:09:14.963677 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.969368 kubelet[3120]: E0707 06:09:14.969327 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.969368 kubelet[3120]: W0707 06:09:14.969354 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.969368 kubelet[3120]: E0707 06:09:14.969377 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.969631 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971252 kubelet[3120]: W0707 06:09:14.969649 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.969669 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.969883 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971252 kubelet[3120]: W0707 06:09:14.969894 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.969905 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.970109 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971252 kubelet[3120]: W0707 06:09:14.970118 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.970133 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.971252 kubelet[3120]: E0707 06:09:14.970865 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971611 kubelet[3120]: W0707 06:09:14.970879 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971611 kubelet[3120]: E0707 06:09:14.970903 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.971611 kubelet[3120]: E0707 06:09:14.971128 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971611 kubelet[3120]: W0707 06:09:14.971137 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971611 kubelet[3120]: E0707 06:09:14.971192 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.971708 kubelet[3120]: E0707 06:09:14.971639 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.971708 kubelet[3120]: W0707 06:09:14.971651 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.971753 kubelet[3120]: E0707 06:09:14.971742 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.972013 kubelet[3120]: E0707 06:09:14.971984 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.972013 kubelet[3120]: W0707 06:09:14.972001 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.972179 kubelet[3120]: E0707 06:09:14.972152 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.972334 kubelet[3120]: E0707 06:09:14.972311 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.972334 kubelet[3120]: W0707 06:09:14.972327 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.972410 kubelet[3120]: E0707 06:09:14.972346 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.972729 kubelet[3120]: E0707 06:09:14.972693 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.972729 kubelet[3120]: W0707 06:09:14.972713 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.972729 kubelet[3120]: E0707 06:09:14.972726 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.972923 kubelet[3120]: E0707 06:09:14.972899 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.972923 kubelet[3120]: W0707 06:09:14.972913 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.973001 kubelet[3120]: E0707 06:09:14.972922 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.973138 kubelet[3120]: E0707 06:09:14.973118 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.973138 kubelet[3120]: W0707 06:09:14.973132 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.973194 kubelet[3120]: E0707 06:09:14.973151 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.973389 kubelet[3120]: E0707 06:09:14.973368 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.973389 kubelet[3120]: W0707 06:09:14.973385 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.973458 kubelet[3120]: E0707 06:09:14.973401 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.973680 kubelet[3120]: E0707 06:09:14.973638 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.973680 kubelet[3120]: W0707 06:09:14.973655 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.973759 kubelet[3120]: E0707 06:09:14.973709 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.974174 kubelet[3120]: E0707 06:09:14.974141 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.974174 kubelet[3120]: W0707 06:09:14.974163 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.974389 kubelet[3120]: E0707 06:09:14.974362 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.974563 kubelet[3120]: E0707 06:09:14.974537 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.974563 kubelet[3120]: W0707 06:09:14.974552 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.974627 kubelet[3120]: E0707 06:09:14.974616 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.974800 kubelet[3120]: E0707 06:09:14.974773 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.974800 kubelet[3120]: W0707 06:09:14.974788 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.974800 kubelet[3120]: E0707 06:09:14.974796 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.975389 kubelet[3120]: E0707 06:09:14.975362 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.975389 kubelet[3120]: W0707 06:09:14.975382 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.975470 kubelet[3120]: E0707 06:09:14.975395 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.886312 kubelet[3120]: I0707 06:09:15.886284 3120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:15.921902 containerd[1690]: time="2025-07-07T06:09:15.921847409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.925369 containerd[1690]: time="2025-07-07T06:09:15.925326856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 06:09:15.931095 containerd[1690]: time="2025-07-07T06:09:15.931047587Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.937585 containerd[1690]: time="2025-07-07T06:09:15.937509000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.938350 containerd[1690]: time="2025-07-07T06:09:15.938157801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.516108319s" Jul 7 06:09:15.938350 containerd[1690]: time="2025-07-07T06:09:15.938197161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 06:09:15.942878 containerd[1690]: time="2025-07-07T06:09:15.942649770Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:09:15.970751 kubelet[3120]: E0707 06:09:15.970608 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.970751 kubelet[3120]: W0707 06:09:15.970631 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.970751 kubelet[3120]: E0707 06:09:15.970652 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.971139 kubelet[3120]: E0707 06:09:15.971043 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.971139 kubelet[3120]: W0707 06:09:15.971057 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.971139 kubelet[3120]: E0707 06:09:15.971071 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.971483 kubelet[3120]: E0707 06:09:15.971382 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.971483 kubelet[3120]: W0707 06:09:15.971397 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.971483 kubelet[3120]: E0707 06:09:15.971410 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.971808 kubelet[3120]: E0707 06:09:15.971683 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.971808 kubelet[3120]: W0707 06:09:15.971695 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.971808 kubelet[3120]: E0707 06:09:15.971705 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.972151 kubelet[3120]: E0707 06:09:15.972053 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.972151 kubelet[3120]: W0707 06:09:15.972066 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.972151 kubelet[3120]: E0707 06:09:15.972078 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.972499 kubelet[3120]: E0707 06:09:15.972389 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.972499 kubelet[3120]: W0707 06:09:15.972403 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.972499 kubelet[3120]: E0707 06:09:15.972437 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.972828 kubelet[3120]: E0707 06:09:15.972722 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.972828 kubelet[3120]: W0707 06:09:15.972734 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.972828 kubelet[3120]: E0707 06:09:15.972745 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.973148 kubelet[3120]: E0707 06:09:15.973046 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.973148 kubelet[3120]: W0707 06:09:15.973057 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.973148 kubelet[3120]: E0707 06:09:15.973067 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.973544 kubelet[3120]: E0707 06:09:15.973402 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.973544 kubelet[3120]: W0707 06:09:15.973416 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.973544 kubelet[3120]: E0707 06:09:15.973428 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.973774 kubelet[3120]: E0707 06:09:15.973692 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.973774 kubelet[3120]: W0707 06:09:15.973703 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.973774 kubelet[3120]: E0707 06:09:15.973713 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.974761 kubelet[3120]: E0707 06:09:15.974657 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.974761 kubelet[3120]: W0707 06:09:15.974671 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.974761 kubelet[3120]: E0707 06:09:15.974687 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.975128 kubelet[3120]: E0707 06:09:15.974982 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.975128 kubelet[3120]: W0707 06:09:15.974994 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.975128 kubelet[3120]: E0707 06:09:15.975004 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.975505 kubelet[3120]: E0707 06:09:15.975400 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.975505 kubelet[3120]: W0707 06:09:15.975412 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.975505 kubelet[3120]: E0707 06:09:15.975426 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.975715 kubelet[3120]: E0707 06:09:15.975702 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.975856 kubelet[3120]: W0707 06:09:15.975762 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.975856 kubelet[3120]: E0707 06:09:15.975780 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.976051 kubelet[3120]: E0707 06:09:15.976039 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.976754 kubelet[3120]: W0707 06:09:15.976102 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.976754 kubelet[3120]: E0707 06:09:15.976117 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.977604 kubelet[3120]: E0707 06:09:15.977574 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.977604 kubelet[3120]: W0707 06:09:15.977599 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.977892 kubelet[3120]: E0707 06:09:15.977863 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.978340 kubelet[3120]: E0707 06:09:15.978312 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.978340 kubelet[3120]: W0707 06:09:15.978332 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.978561 kubelet[3120]: E0707 06:09:15.978446 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.979050 kubelet[3120]: E0707 06:09:15.979021 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.979050 kubelet[3120]: W0707 06:09:15.979045 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.979296 kubelet[3120]: E0707 06:09:15.979269 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.979548 kubelet[3120]: E0707 06:09:15.979528 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.979548 kubelet[3120]: W0707 06:09:15.979543 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.980100 kubelet[3120]: E0707 06:09:15.980041 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.980590 kubelet[3120]: E0707 06:09:15.980560 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.980590 kubelet[3120]: W0707 06:09:15.980582 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.980708 kubelet[3120]: E0707 06:09:15.980606 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.981209 kubelet[3120]: E0707 06:09:15.981162 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.981209 kubelet[3120]: W0707 06:09:15.981207 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.981308 kubelet[3120]: E0707 06:09:15.981259 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.982378 kubelet[3120]: E0707 06:09:15.981638 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.982378 kubelet[3120]: W0707 06:09:15.981655 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.982378 kubelet[3120]: E0707 06:09:15.981675 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.982378 kubelet[3120]: E0707 06:09:15.982326 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.982378 kubelet[3120]: W0707 06:09:15.982343 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.982593 kubelet[3120]: E0707 06:09:15.982565 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.985295 kubelet[3120]: E0707 06:09:15.984204 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.985295 kubelet[3120]: W0707 06:09:15.984263 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.987558 kubelet[3120]: E0707 06:09:15.987524 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.987558 kubelet[3120]: W0707 06:09:15.987551 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.989486 kubelet[3120]: E0707 06:09:15.989443 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.989486 kubelet[3120]: W0707 06:09:15.989478 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.991665 kubelet[3120]: E0707 06:09:15.989503 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.991665 kubelet[3120]: E0707 06:09:15.989539 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:15.991665 kubelet[3120]: E0707 06:09:15.990029 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.991665 kubelet[3120]: W0707 06:09:15.990043 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.991665 kubelet[3120]: E0707 06:09:15.990267 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.991814 kubelet[3120]: E0707 06:09:15.991692 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.991814 kubelet[3120]: W0707 06:09:15.991706 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.991814 kubelet[3120]: E0707 06:09:15.991728 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.994156 kubelet[3120]: E0707 06:09:15.994122 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.995895 kubelet[3120]: E0707 06:09:15.994199 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.995895 kubelet[3120]: W0707 06:09:15.994208 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.995895 kubelet[3120]: E0707 06:09:15.994220 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.998310 kubelet[3120]: E0707 06:09:15.997156 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:15.998310 kubelet[3120]: W0707 06:09:15.997179 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:15.998310 kubelet[3120]: E0707 06:09:15.997201 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.000286 kubelet[3120]: E0707 06:09:15.998889 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.000460 kubelet[3120]: W0707 06:09:16.000433 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.000533 kubelet[3120]: E0707 06:09:16.000520 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.001033 kubelet[3120]: E0707 06:09:16.000960 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.001632 kubelet[3120]: W0707 06:09:16.001612 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.001738 kubelet[3120]: E0707 06:09:16.001725 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.002279 kubelet[3120]: E0707 06:09:16.002200 3120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.002388 kubelet[3120]: W0707 06:09:16.002366 3120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.002452 kubelet[3120]: E0707 06:09:16.002441 3120 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.022399 containerd[1690]: time="2025-07-07T06:09:16.022343168Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6\"" Jul 7 06:09:16.024288 containerd[1690]: time="2025-07-07T06:09:16.023156889Z" level=info msg="StartContainer for \"68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6\"" Jul 7 06:09:16.077446 systemd[1]: Started cri-containerd-68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6.scope - libcontainer container 68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6. Jul 7 06:09:16.157670 containerd[1690]: time="2025-07-07T06:09:16.157549995Z" level=info msg="StartContainer for \"68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6\" returns successfully" Jul 7 06:09:16.183925 systemd[1]: cri-containerd-68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6.scope: Deactivated successfully. Jul 7 06:09:16.215531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6-rootfs.mount: Deactivated successfully. 
Jul 7 06:09:16.767608 kubelet[3120]: E0707 06:09:16.767554 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:17.292404 containerd[1690]: time="2025-07-07T06:09:17.292342160Z" level=info msg="shim disconnected" id=68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6 namespace=k8s.io Jul 7 06:09:17.292404 containerd[1690]: time="2025-07-07T06:09:17.292397880Z" level=warning msg="cleaning up after shim disconnected" id=68be63a9e052dbec67851eed96c76429e19787d84fe278fbcb4195fda88b61c6 namespace=k8s.io Jul 7 06:09:17.292404 containerd[1690]: time="2025-07-07T06:09:17.292408000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:09:17.662927 kubelet[3120]: I0707 06:09:17.662695 3120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:17.895478 containerd[1690]: time="2025-07-07T06:09:17.895399193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:09:18.768332 kubelet[3120]: E0707 06:09:18.768278 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:20.767751 kubelet[3120]: E0707 06:09:20.767702 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:21.273010 containerd[1690]: time="2025-07-07T06:09:21.272939741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:21.276011 containerd[1690]: time="2025-07-07T06:09:21.275837307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 06:09:21.282736 containerd[1690]: time="2025-07-07T06:09:21.282673161Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:21.289877 containerd[1690]: time="2025-07-07T06:09:21.289789336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:21.290927 containerd[1690]: time="2025-07-07T06:09:21.290525178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.395072025s" Jul 7 06:09:21.290927 containerd[1690]: time="2025-07-07T06:09:21.290566178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 06:09:21.293128 containerd[1690]: time="2025-07-07T06:09:21.292984103Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:09:21.354291 containerd[1690]: time="2025-07-07T06:09:21.354192152Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346\"" Jul 7 06:09:21.355812 containerd[1690]: time="2025-07-07T06:09:21.355604115Z" level=info msg="StartContainer for \"31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346\"" Jul 7 06:09:21.384435 systemd[1]: Started cri-containerd-31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346.scope - libcontainer container 31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346. Jul 7 06:09:21.416021 containerd[1690]: time="2025-07-07T06:09:21.415901563Z" level=info msg="StartContainer for \"31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346\" returns successfully" Jul 7 06:09:22.768269 kubelet[3120]: E0707 06:09:22.768204 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:22.927788 containerd[1690]: time="2025-07-07T06:09:22.927735315Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:09:22.930415 systemd[1]: cri-containerd-31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346.scope: Deactivated successfully. Jul 7 06:09:22.949899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346-rootfs.mount: Deactivated successfully. 
Jul 7 06:09:22.997561 kubelet[3120]: I0707 06:09:22.997208 3120 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 06:09:23.047343 systemd[1]: Created slice kubepods-burstable-podfa1b246f_73f9_4220_bb08_88842ebef68c.slice - libcontainer container kubepods-burstable-podfa1b246f_73f9_4220_bb08_88842ebef68c.slice.
Jul 7 06:09:23.068938 systemd[1]: Created slice kubepods-besteffort-pod8e40e25b_cc4f_4bf0_9b64_76658b196296.slice - libcontainer container kubepods-besteffort-pod8e40e25b_cc4f_4bf0_9b64_76658b196296.slice.
Jul 7 06:09:23.081876 systemd[1]: Created slice kubepods-besteffort-podab60cfb0_6466_4ab8_aaf5_3dfba3e66388.slice - libcontainer container kubepods-besteffort-podab60cfb0_6466_4ab8_aaf5_3dfba3e66388.slice.
Jul 7 06:09:23.093027 systemd[1]: Created slice kubepods-besteffort-podaa20f3ca_cae5_4a79_b6b7_441c614e749e.slice - libcontainer container kubepods-besteffort-podaa20f3ca_cae5_4a79_b6b7_441c614e749e.slice.
Jul 7 06:09:23.100375 systemd[1]: Created slice kubepods-besteffort-pod7bdc3f44_4958_4179_a88f_0dae8e77b540.slice - libcontainer container kubepods-besteffort-pod7bdc3f44_4958_4179_a88f_0dae8e77b540.slice.
Jul 7 06:09:23.108274 systemd[1]: Created slice kubepods-besteffort-pod536f7498_046f_4b77_a82d_d7619df81d7a.slice - libcontainer container kubepods-besteffort-pod536f7498_046f_4b77_a82d_d7619df81d7a.slice.
Jul 7 06:09:23.116648 systemd[1]: Created slice kubepods-burstable-pod5c205245_76ab_43f8_9df0_ba526a26f50c.slice - libcontainer container kubepods-burstable-pod5c205245_76ab_43f8_9df0_ba526a26f50c.slice.
Jul 7 06:09:23.284254 kubelet[3120]: I0707 06:09:23.133045 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm77c\" (UniqueName: \"kubernetes.io/projected/ab60cfb0-6466-4ab8-aaf5-3dfba3e66388-kube-api-access-sm77c\") pod \"calico-kube-controllers-6b5f9856b7-2gvp4\" (UID: \"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388\") " pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4"
Jul 7 06:09:23.284254 kubelet[3120]: I0707 06:09:23.133081 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7wz\" (UniqueName: \"kubernetes.io/projected/536f7498-046f-4b77-a82d-d7619df81d7a-kube-api-access-zk7wz\") pod \"calico-apiserver-d8cb59fcd-czbmd\" (UID: \"536f7498-046f-4b77-a82d-d7619df81d7a\") " pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd"
Jul 7 06:09:23.284254 kubelet[3120]: I0707 06:09:23.133098 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c205245-76ab-43f8-9df0-ba526a26f50c-config-volume\") pod \"coredns-668d6bf9bc-s4xjt\" (UID: \"5c205245-76ab-43f8-9df0-ba526a26f50c\") " pod="kube-system/coredns-668d6bf9bc-s4xjt"
Jul 7 06:09:23.284254 kubelet[3120]: I0707 06:09:23.133118 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab60cfb0-6466-4ab8-aaf5-3dfba3e66388-tigera-ca-bundle\") pod \"calico-kube-controllers-6b5f9856b7-2gvp4\" (UID: \"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388\") " pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4"
Jul 7 06:09:23.284254 kubelet[3120]: I0707 06:09:23.133134 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn4lj\" (UniqueName: \"kubernetes.io/projected/aa20f3ca-cae5-4a79-b6b7-441c614e749e-kube-api-access-qn4lj\") pod \"goldmane-768f4c5c69-t795m\" (UID: \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\") " pod="calico-system/goldmane-768f4c5c69-t795m"
Jul 7 06:09:23.284534 kubelet[3120]: I0707 06:09:23.133152 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-backend-key-pair\") pod \"whisker-7b96468696-n7cks\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " pod="calico-system/whisker-7b96468696-n7cks"
Jul 7 06:09:23.284534 kubelet[3120]: I0707 06:09:23.133169 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/536f7498-046f-4b77-a82d-d7619df81d7a-calico-apiserver-certs\") pod \"calico-apiserver-d8cb59fcd-czbmd\" (UID: \"536f7498-046f-4b77-a82d-d7619df81d7a\") " pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd"
Jul 7 06:09:23.284534 kubelet[3120]: I0707 06:09:23.133186 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhtc\" (UniqueName: \"kubernetes.io/projected/5c205245-76ab-43f8-9df0-ba526a26f50c-kube-api-access-7fhtc\") pod \"coredns-668d6bf9bc-s4xjt\" (UID: \"5c205245-76ab-43f8-9df0-ba526a26f50c\") " pod="kube-system/coredns-668d6bf9bc-s4xjt"
Jul 7 06:09:23.284534 kubelet[3120]: I0707 06:09:23.133268 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl9d\" (UniqueName: \"kubernetes.io/projected/8e40e25b-cc4f-4bf0-9b64-76658b196296-kube-api-access-cxl9d\") pod \"calico-apiserver-d8cb59fcd-6bttb\" (UID: \"8e40e25b-cc4f-4bf0-9b64-76658b196296\") " pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb"
Jul 7 06:09:23.284534 kubelet[3120]: I0707 06:09:23.133291 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa20f3ca-cae5-4a79-b6b7-441c614e749e-config\") pod \"goldmane-768f4c5c69-t795m\" (UID: \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\") " pod="calico-system/goldmane-768f4c5c69-t795m"
Jul 7 06:09:23.284686 kubelet[3120]: I0707 06:09:23.133309 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa20f3ca-cae5-4a79-b6b7-441c614e749e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-t795m\" (UID: \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\") " pod="calico-system/goldmane-768f4c5c69-t795m"
Jul 7 06:09:23.284686 kubelet[3120]: I0707 06:09:23.133324 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-ca-bundle\") pod \"whisker-7b96468696-n7cks\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " pod="calico-system/whisker-7b96468696-n7cks"
Jul 7 06:09:23.284686 kubelet[3120]: I0707 06:09:23.133356 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsntt\" (UniqueName: \"kubernetes.io/projected/fa1b246f-73f9-4220-bb08-88842ebef68c-kube-api-access-rsntt\") pod \"coredns-668d6bf9bc-shc5t\" (UID: \"fa1b246f-73f9-4220-bb08-88842ebef68c\") " pod="kube-system/coredns-668d6bf9bc-shc5t"
Jul 7 06:09:23.284686 kubelet[3120]: I0707 06:09:23.133375 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e40e25b-cc4f-4bf0-9b64-76658b196296-calico-apiserver-certs\") pod \"calico-apiserver-d8cb59fcd-6bttb\" (UID: \"8e40e25b-cc4f-4bf0-9b64-76658b196296\") " pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb"
Jul 7 06:09:23.284686 kubelet[3120]: I0707 06:09:23.133397 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aa20f3ca-cae5-4a79-b6b7-441c614e749e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-t795m\" (UID: \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\") " pod="calico-system/goldmane-768f4c5c69-t795m"
Jul 7 06:09:23.284834 kubelet[3120]: I0707 06:09:23.133417 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67zp\" (UniqueName: \"kubernetes.io/projected/7bdc3f44-4958-4179-a88f-0dae8e77b540-kube-api-access-l67zp\") pod \"whisker-7b96468696-n7cks\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " pod="calico-system/whisker-7b96468696-n7cks"
Jul 7 06:09:23.284834 kubelet[3120]: I0707 06:09:23.133434 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa1b246f-73f9-4220-bb08-88842ebef68c-config-volume\") pod \"coredns-668d6bf9bc-shc5t\" (UID: \"fa1b246f-73f9-4220-bb08-88842ebef68c\") " pod="kube-system/coredns-668d6bf9bc-shc5t"
Jul 7 06:09:23.548833 containerd[1690]: time="2025-07-07T06:09:23.548675026Z" level=info msg="shim disconnected" id=31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346 namespace=k8s.io
Jul 7 06:09:23.548833 containerd[1690]: time="2025-07-07T06:09:23.548737026Z" level=warning msg="cleaning up after shim disconnected" id=31309f7a1671c37e2962f287fbbf614729f89983203abc973d981e3de8e32346 namespace=k8s.io
Jul 7 06:09:23.548833 containerd[1690]: time="2025-07-07T06:09:23.548748506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:09:23.560106 containerd[1690]: time="2025-07-07T06:09:23.560054490Z" level=warning msg="cleanup warnings time=\"2025-07-07T06:09:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 06:09:23.587970 containerd[1690]: time="2025-07-07T06:09:23.587882669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shc5t,Uid:fa1b246f-73f9-4220-bb08-88842ebef68c,Namespace:kube-system,Attempt:0,}"
Jul 7 06:09:23.592183 containerd[1690]: time="2025-07-07T06:09:23.591621796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5f9856b7-2gvp4,Uid:ab60cfb0-6466-4ab8-aaf5-3dfba3e66388,Namespace:calico-system,Attempt:0,}"
Jul 7 06:09:23.592654 containerd[1690]: time="2025-07-07T06:09:23.592479038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b96468696-n7cks,Uid:7bdc3f44-4958-4179-a88f-0dae8e77b540,Namespace:calico-system,Attempt:0,}"
Jul 7 06:09:23.606625 containerd[1690]: time="2025-07-07T06:09:23.606577468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t795m,Uid:aa20f3ca-cae5-4a79-b6b7-441c614e749e,Namespace:calico-system,Attempt:0,}"
Jul 7 06:09:23.606625 containerd[1690]: time="2025-07-07T06:09:23.606744108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-6bttb,Uid:8e40e25b-cc4f-4bf0-9b64-76658b196296,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:09:23.613029 containerd[1690]: time="2025-07-07T06:09:23.612980762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-czbmd,Uid:536f7498-046f-4b77-a82d-d7619df81d7a,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:09:23.639085 containerd[1690]: time="2025-07-07T06:09:23.638846896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4xjt,Uid:5c205245-76ab-43f8-9df0-ba526a26f50c,Namespace:kube-system,Attempt:0,}"
Jul 7 06:09:23.910474 containerd[1690]: time="2025-07-07T06:09:23.910416110Z" level=error msg="Failed to destroy network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:09:23.911044 containerd[1690]: time="2025-07-07T06:09:23.910740590Z" level=error msg="encountered an error cleaning up failed sandbox
\"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.911044 containerd[1690]: time="2025-07-07T06:09:23.910794070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shc5t,Uid:fa1b246f-73f9-4220-bb08-88842ebef68c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.911044 containerd[1690]: time="2025-07-07T06:09:23.910910991Z" level=error msg="Failed to destroy network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.912109 containerd[1690]: time="2025-07-07T06:09:23.912062153Z" level=error msg="encountered an error cleaning up failed sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.912241 containerd[1690]: time="2025-07-07T06:09:23.912134313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b96468696-n7cks,Uid:7bdc3f44-4958-4179-a88f-0dae8e77b540,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.914359 kubelet[3120]: E0707 06:09:23.913790 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.914359 kubelet[3120]: E0707 06:09:23.913863 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-shc5t" Jul 7 06:09:23.914359 kubelet[3120]: E0707 06:09:23.913886 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-shc5t" Jul 7 06:09:23.914787 kubelet[3120]: E0707 06:09:23.913923 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-shc5t_kube-system(fa1b246f-73f9-4220-bb08-88842ebef68c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-shc5t_kube-system(fa1b246f-73f9-4220-bb08-88842ebef68c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-shc5t" podUID="fa1b246f-73f9-4220-bb08-88842ebef68c" Jul 7 06:09:23.914787 kubelet[3120]: E0707 06:09:23.914257 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:23.914787 kubelet[3120]: E0707 06:09:23.914281 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b96468696-n7cks" Jul 7 06:09:23.914891 kubelet[3120]: E0707 06:09:23.914296 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b96468696-n7cks" Jul 7 06:09:23.914891 kubelet[3120]: E0707 06:09:23.914325 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b96468696-n7cks_calico-system(7bdc3f44-4958-4179-a88f-0dae8e77b540)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b96468696-n7cks_calico-system(7bdc3f44-4958-4179-a88f-0dae8e77b540)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b96468696-n7cks" podUID="7bdc3f44-4958-4179-a88f-0dae8e77b540" Jul 7 06:09:23.940098 kubelet[3120]: I0707 06:09:23.939291 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:23.943269 containerd[1690]: time="2025-07-07T06:09:23.943097899Z" level=info msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" Jul 7 06:09:23.943603 containerd[1690]: 
time="2025-07-07T06:09:23.943317139Z" level=info msg="Ensure that sandbox 644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd in task-service has been cleanup successfully" Jul 7 06:09:23.980928 containerd[1690]: time="2025-07-07T06:09:23.979981776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:09:23.984854 kubelet[3120]: I0707 06:09:23.984808 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:23.992329 containerd[1690]: time="2025-07-07T06:09:23.989554597Z" level=info msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" Jul 7 06:09:23.992329 containerd[1690]: time="2025-07-07T06:09:23.989748757Z" level=info msg="Ensure that sandbox ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e in task-service has been cleanup successfully" Jul 7 06:09:24.017838 containerd[1690]: time="2025-07-07T06:09:24.017681136Z" level=error msg="Failed to destroy network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.020280 containerd[1690]: time="2025-07-07T06:09:24.019859781Z" level=error msg="encountered an error cleaning up failed sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.020280 containerd[1690]: time="2025-07-07T06:09:24.019934741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5f9856b7-2gvp4,Uid:ab60cfb0-6466-4ab8-aaf5-3dfba3e66388,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.021329 kubelet[3120]: E0707 06:09:24.020906 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.021438 kubelet[3120]: E0707 06:09:24.021354 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4" Jul 7 06:09:24.021438 kubelet[3120]: E0707 06:09:24.021377 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4" Jul 7 06:09:24.022497 kubelet[3120]: E0707 06:09:24.021791 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b5f9856b7-2gvp4_calico-system(ab60cfb0-6466-4ab8-aaf5-3dfba3e66388)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b5f9856b7-2gvp4_calico-system(ab60cfb0-6466-4ab8-aaf5-3dfba3e66388)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4" podUID="ab60cfb0-6466-4ab8-aaf5-3dfba3e66388" Jul 7 06:09:24.022914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0-shm.mount: Deactivated successfully. Jul 7 06:09:24.120190 containerd[1690]: time="2025-07-07T06:09:24.120103232Z" level=error msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" failed" error="failed to destroy network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.120727 kubelet[3120]: E0707 06:09:24.120692 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:24.120999 kubelet[3120]: E0707 06:09:24.120949 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd"} Jul 7 06:09:24.121106 kubelet[3120]: E0707 06:09:24.121089 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa1b246f-73f9-4220-bb08-88842ebef68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:24.121237 kubelet[3120]: E0707 06:09:24.121198 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa1b246f-73f9-4220-bb08-88842ebef68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-shc5t" podUID="fa1b246f-73f9-4220-bb08-88842ebef68c" Jul 7 06:09:24.122557 containerd[1690]: time="2025-07-07T06:09:24.121471315Z" level=error msg="Failed to destroy network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.125616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d-shm.mount: Deactivated successfully. Jul 7 06:09:24.127665 containerd[1690]: time="2025-07-07T06:09:24.126192405Z" level=error msg="encountered an error cleaning up failed sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.127912 containerd[1690]: time="2025-07-07T06:09:24.127874049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-6bttb,Uid:8e40e25b-cc4f-4bf0-9b64-76658b196296,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.128349 kubelet[3120]: E0707 06:09:24.128311 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.130635 containerd[1690]: time="2025-07-07T06:09:24.129150651Z" level=error msg="Failed to destroy network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.130936 kubelet[3120]: E0707 06:09:24.129457 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb" Jul 7 06:09:24.130936 kubelet[3120]: E0707 06:09:24.129487 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb" Jul 7 06:09:24.130936 kubelet[3120]: E0707 06:09:24.129537 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8cb59fcd-6bttb_calico-apiserver(8e40e25b-cc4f-4bf0-9b64-76658b196296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8cb59fcd-6bttb_calico-apiserver(8e40e25b-cc4f-4bf0-9b64-76658b196296)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb" podUID="8e40e25b-cc4f-4bf0-9b64-76658b196296" Jul 7 06:09:24.133429 containerd[1690]: time="2025-07-07T06:09:24.133358540Z" level=error msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" failed" error="failed to destroy network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.133751 kubelet[3120]: E0707 06:09:24.133715 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:24.133991 kubelet[3120]: E0707 06:09:24.133882 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e"} Jul 7 06:09:24.133991 kubelet[3120]: E0707 06:09:24.133921 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bdc3f44-4958-4179-a88f-0dae8e77b540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:24.133991 kubelet[3120]: E0707 06:09:24.133963 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bdc3f44-4958-4179-a88f-0dae8e77b540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b96468696-n7cks" podUID="7bdc3f44-4958-4179-a88f-0dae8e77b540" Jul 7 06:09:24.134217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c-shm.mount: Deactivated successfully. 
Jul 7 06:09:24.136942 containerd[1690]: time="2025-07-07T06:09:24.136879748Z" level=error msg="encountered an error cleaning up failed sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.137063 containerd[1690]: time="2025-07-07T06:09:24.136971948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t795m,Uid:aa20f3ca-cae5-4a79-b6b7-441c614e749e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.137474 kubelet[3120]: E0707 06:09:24.137302 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.137474 kubelet[3120]: E0707 06:09:24.137373 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-t795m" Jul 7 06:09:24.137474 kubelet[3120]: E0707 06:09:24.137392 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-t795m" Jul 7 06:09:24.137606 kubelet[3120]: E0707 06:09:24.137434 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-t795m_calico-system(aa20f3ca-cae5-4a79-b6b7-441c614e749e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-t795m_calico-system(aa20f3ca-cae5-4a79-b6b7-441c614e749e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-t795m" podUID="aa20f3ca-cae5-4a79-b6b7-441c614e749e" Jul 7 06:09:24.154084 containerd[1690]: time="2025-07-07T06:09:24.154015744Z" level=error msg="Failed to destroy network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.154485 containerd[1690]: time="2025-07-07T06:09:24.154417705Z" level=error msg="encountered an error cleaning up failed sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.154535 containerd[1690]: time="2025-07-07T06:09:24.154498145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4xjt,Uid:5c205245-76ab-43f8-9df0-ba526a26f50c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.156854 kubelet[3120]: E0707 06:09:24.156372 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.156854 kubelet[3120]: E0707 06:09:24.156437 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4xjt" Jul 7 06:09:24.156854 kubelet[3120]: E0707 06:09:24.156461 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4xjt" Jul 7 06:09:24.156947 kubelet[3120]: E0707 06:09:24.156496 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s4xjt_kube-system(5c205245-76ab-43f8-9df0-ba526a26f50c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s4xjt_kube-system(5c205245-76ab-43f8-9df0-ba526a26f50c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4xjt" podUID="5c205245-76ab-43f8-9df0-ba526a26f50c" Jul 7 06:09:24.158742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a-shm.mount: Deactivated successfully. 
Jul 7 06:09:24.163177 containerd[1690]: time="2025-07-07T06:09:24.163055363Z" level=error msg="Failed to destroy network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.164132 containerd[1690]: time="2025-07-07T06:09:24.163674404Z" level=error msg="encountered an error cleaning up failed sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.164132 containerd[1690]: time="2025-07-07T06:09:24.163731524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-czbmd,Uid:536f7498-046f-4b77-a82d-d7619df81d7a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.164622 kubelet[3120]: E0707 06:09:24.164358 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.164622 kubelet[3120]: E0707 06:09:24.164443 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd" Jul 7 06:09:24.164622 kubelet[3120]: E0707 06:09:24.164463 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd" Jul 7 06:09:24.164715 kubelet[3120]: E0707 06:09:24.164520 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8cb59fcd-czbmd_calico-apiserver(536f7498-046f-4b77-a82d-d7619df81d7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8cb59fcd-czbmd_calico-apiserver(536f7498-046f-4b77-a82d-d7619df81d7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd" podUID="536f7498-046f-4b77-a82d-d7619df81d7a" Jul 7 06:09:24.773438 systemd[1]: Created slice kubepods-besteffort-pod7c9907d5_346d_4929_b83e_924668157d8a.slice - libcontainer container kubepods-besteffort-pod7c9907d5_346d_4929_b83e_924668157d8a.slice. Jul 7 06:09:24.776280 containerd[1690]: time="2025-07-07T06:09:24.776181137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq7zv,Uid:7c9907d5-346d-4929-b83e-924668157d8a,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:24.902422 containerd[1690]: time="2025-07-07T06:09:24.902279244Z" level=error msg="Failed to destroy network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.902673 containerd[1690]: time="2025-07-07T06:09:24.902639004Z" level=error msg="encountered an error cleaning up failed sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.902733 containerd[1690]: time="2025-07-07T06:09:24.902700525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq7zv,Uid:7c9907d5-346d-4929-b83e-924668157d8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.903300 kubelet[3120]: E0707 06:09:24.903011 3120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:24.903300 kubelet[3120]: E0707 06:09:24.903071 3120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wq7zv" Jul 7 06:09:24.903300 kubelet[3120]: E0707 06:09:24.903091 3120 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wq7zv" Jul 7 06:09:24.904431 kubelet[3120]: E0707 06:09:24.903135 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-wq7zv_calico-system(7c9907d5-346d-4929-b83e-924668157d8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wq7zv_calico-system(7c9907d5-346d-4929-b83e-924668157d8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:24.951445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23-shm.mount: Deactivated successfully. Jul 7 06:09:24.987715 kubelet[3120]: I0707 06:09:24.987680 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:24.988456 containerd[1690]: time="2025-07-07T06:09:24.988257145Z" level=info msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" Jul 7 06:09:24.988456 containerd[1690]: time="2025-07-07T06:09:24.988447586Z" level=info msg="Ensure that sandbox e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d in task-service has been cleanup successfully" Jul 7 06:09:24.991008 kubelet[3120]: I0707 06:09:24.990266 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:24.993510 containerd[1690]: time="2025-07-07T06:09:24.992573394Z" level=info msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" Jul 7 06:09:24.994030 containerd[1690]: time="2025-07-07T06:09:24.993696917Z" level=info msg="Ensure that sandbox 08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0 in task-service has been cleanup successfully" Jul 7 06:09:24.995262 kubelet[3120]: I0707 06:09:24.995041 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:24.996477 containerd[1690]: time="2025-07-07T06:09:24.995854321Z" level=info msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" Jul 7 06:09:24.996477 containerd[1690]: time="2025-07-07T06:09:24.996033002Z" level=info msg="Ensure that sandbox 66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c in task-service has been cleanup successfully" Jul 7 06:09:25.001117 kubelet[3120]: I0707 06:09:25.000951 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:25.003823 containerd[1690]: time="2025-07-07T06:09:25.003691218Z" level=info msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" Jul 7 06:09:25.004377 containerd[1690]: time="2025-07-07T06:09:25.004250259Z" level=info msg="Ensure that sandbox 9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f in task-service has been cleanup successfully" Jul 7 06:09:25.009244 kubelet[3120]: I0707 06:09:25.008279 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:25.009355 
containerd[1690]: time="2025-07-07T06:09:25.009071789Z" level=info msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" Jul 7 06:09:25.012485 containerd[1690]: time="2025-07-07T06:09:25.012431236Z" level=info msg="Ensure that sandbox fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23 in task-service has been cleanup successfully" Jul 7 06:09:25.018122 kubelet[3120]: I0707 06:09:25.017278 3120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:25.019237 containerd[1690]: time="2025-07-07T06:09:25.019148210Z" level=info msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" Jul 7 06:09:25.019596 containerd[1690]: time="2025-07-07T06:09:25.019445011Z" level=info msg="Ensure that sandbox 26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a in task-service has been cleanup successfully" Jul 7 06:09:25.076152 containerd[1690]: time="2025-07-07T06:09:25.076075171Z" level=error msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" failed" error="failed to destroy network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.076440 kubelet[3120]: E0707 06:09:25.076365 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:25.076440 kubelet[3120]: E0707 06:09:25.076416 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c"} Jul 7 06:09:25.076512 kubelet[3120]: E0707 06:09:25.076450 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.076512 kubelet[3120]: E0707 06:09:25.076472 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa20f3ca-cae5-4a79-b6b7-441c614e749e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-t795m" podUID="aa20f3ca-cae5-4a79-b6b7-441c614e749e" Jul 7 06:09:25.087609 containerd[1690]: time="2025-07-07T06:09:25.087061114Z" level=error 
msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" failed" error="failed to destroy network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.087787 kubelet[3120]: E0707 06:09:25.087618 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:25.087787 kubelet[3120]: E0707 06:09:25.087749 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f"} Jul 7 06:09:25.088303 kubelet[3120]: E0707 06:09:25.087785 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c9907d5-346d-4929-b83e-924668157d8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.088303 kubelet[3120]: E0707 06:09:25.087819 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c9907d5-346d-4929-b83e-924668157d8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wq7zv" podUID="7c9907d5-346d-4929-b83e-924668157d8a" Jul 7 06:09:25.095165 containerd[1690]: time="2025-07-07T06:09:25.094014369Z" level=error msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" failed" error="failed to destroy network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.095348 kubelet[3120]: E0707 06:09:25.094804 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:25.095348 kubelet[3120]: E0707 06:09:25.094856 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d"} Jul 7 06:09:25.095348 kubelet[3120]: E0707 06:09:25.094889 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e40e25b-cc4f-4bf0-9b64-76658b196296\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.095348 kubelet[3120]: E0707 06:09:25.094917 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e40e25b-cc4f-4bf0-9b64-76658b196296\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb" podUID="8e40e25b-cc4f-4bf0-9b64-76658b196296" Jul 7 06:09:25.100638 containerd[1690]: time="2025-07-07T06:09:25.100568782Z" level=error msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" failed" error="failed to destroy network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.101414 kubelet[3120]: E0707 06:09:25.101365 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:25.101501 kubelet[3120]: E0707 06:09:25.101444 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0"} Jul 7 06:09:25.101501 kubelet[3120]: E0707 06:09:25.101482 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.101592 kubelet[3120]: E0707 06:09:25.101514 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4" podUID="ab60cfb0-6466-4ab8-aaf5-3dfba3e66388" Jul 7 06:09:25.102863 containerd[1690]: time="2025-07-07T06:09:25.102796067Z" level=error msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" failed" error="failed to destroy network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.103084 kubelet[3120]: E0707 06:09:25.103035 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:25.103141 kubelet[3120]: E0707 06:09:25.103090 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23"} Jul 7 06:09:25.103141 kubelet[3120]: E0707 06:09:25.103124 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"536f7498-046f-4b77-a82d-d7619df81d7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.103266 kubelet[3120]: E0707 06:09:25.103144 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"536f7498-046f-4b77-a82d-d7619df81d7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd" podUID="536f7498-046f-4b77-a82d-d7619df81d7a" Jul 7 06:09:25.110162 containerd[1690]: time="2025-07-07T06:09:25.110110443Z" level=error msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" failed" error="failed to destroy network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:25.110638 kubelet[3120]: E0707 06:09:25.110588 3120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:25.110705 kubelet[3120]: E0707 06:09:25.110659 3120 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a"} Jul 7 06:09:25.110705 kubelet[3120]: E0707 06:09:25.110696 3120 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c205245-76ab-43f8-9df0-ba526a26f50c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:25.110791 kubelet[3120]: E0707 06:09:25.110726 3120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c205245-76ab-43f8-9df0-ba526a26f50c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4xjt" podUID="5c205245-76ab-43f8-9df0-ba526a26f50c" Jul 7 06:09:30.636408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273504216.mount: Deactivated successfully. Jul 7 06:09:30.704007 containerd[1690]: time="2025-07-07T06:09:30.703783760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:30.709906 containerd[1690]: time="2025-07-07T06:09:30.709852012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 06:09:30.720260 containerd[1690]: time="2025-07-07T06:09:30.720173273Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:30.727616 containerd[1690]: time="2025-07-07T06:09:30.727539528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:30.728486 containerd[1690]: time="2025-07-07T06:09:30.728318809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.748291473s" Jul 7 06:09:30.728486 containerd[1690]: time="2025-07-07T06:09:30.728368089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 06:09:30.744076 containerd[1690]: time="2025-07-07T06:09:30.744010121Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:09:30.810517 containerd[1690]: time="2025-07-07T06:09:30.810415694Z" level=info msg="CreateContainer within sandbox \"3beb95f3d239e14f9524032ca5a612469ee513d9da3a4ee1c676239a846ff215\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66\"" Jul 7 06:09:30.811871 containerd[1690]: time="2025-07-07T06:09:30.811357736Z" level=info msg="StartContainer for \"d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66\"" Jul 7 06:09:30.844438 systemd[1]: Started cri-containerd-d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66.scope - libcontainer container d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66. Jul 7 06:09:30.891208 containerd[1690]: time="2025-07-07T06:09:30.890237934Z" level=info msg="StartContainer for \"d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66\" returns successfully" Jul 7 06:09:31.055459 kubelet[3120]: I0707 06:09:31.055254 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h2h6b" podStartSLOduration=1.296125594 podStartE2EDuration="20.055213345s" podCreationTimestamp="2025-07-07 06:09:11 +0000 UTC" firstStartedPulling="2025-07-07 06:09:11.97011978 +0000 UTC m=+28.317950950" lastFinishedPulling="2025-07-07 06:09:30.729207531 +0000 UTC m=+47.077038701" observedRunningTime="2025-07-07 06:09:31.05293094 +0000 UTC m=+47.400762110" watchObservedRunningTime="2025-07-07 06:09:31.055213345 +0000 UTC m=+47.403044515" Jul 7 06:09:31.243602 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:09:31.243754 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 06:09:31.396178 containerd[1690]: time="2025-07-07T06:09:31.396120588Z" level=info msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.516 [INFO][4413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.516 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" iface="eth0" netns="/var/run/netns/cni-2e895572-4166-d4c6-4bb0-c0155f1b4248" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.517 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" iface="eth0" netns="/var/run/netns/cni-2e895572-4166-d4c6-4bb0-c0155f1b4248" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.517 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" iface="eth0" netns="/var/run/netns/cni-2e895572-4166-d4c6-4bb0-c0155f1b4248" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.517 [INFO][4413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.519 [INFO][4413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.544 [INFO][4422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.545 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.545 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.553 [WARNING][4422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.553 [INFO][4422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.555 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:31.558528 containerd[1690]: 2025-07-07 06:09:31.556 [INFO][4413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:31.558973 containerd[1690]: time="2025-07-07T06:09:31.558700594Z" level=info msg="TearDown network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" successfully" Jul 7 06:09:31.558973 containerd[1690]: time="2025-07-07T06:09:31.558797754Z" level=info msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" returns successfully" Jul 7 06:09:31.596615 kubelet[3120]: I0707 06:09:31.596577 3120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-ca-bundle\") pod \"7bdc3f44-4958-4179-a88f-0dae8e77b540\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " Jul 7 06:09:31.596972 kubelet[3120]: I0707 06:09:31.596946 3120 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7bdc3f44-4958-4179-a88f-0dae8e77b540" (UID: "7bdc3f44-4958-4179-a88f-0dae8e77b540"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:09:31.597782 kubelet[3120]: I0707 06:09:31.597707 3120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-backend-key-pair\") pod \"7bdc3f44-4958-4179-a88f-0dae8e77b540\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " Jul 7 06:09:31.597896 kubelet[3120]: I0707 06:09:31.597796 3120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67zp\" (UniqueName: \"kubernetes.io/projected/7bdc3f44-4958-4179-a88f-0dae8e77b540-kube-api-access-l67zp\") pod \"7bdc3f44-4958-4179-a88f-0dae8e77b540\" (UID: \"7bdc3f44-4958-4179-a88f-0dae8e77b540\") " Jul 7 06:09:31.597926 kubelet[3120]: I0707 06:09:31.597898 3120 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-ca-bundle\") on node \"ci-4081.3.4-a-d5356a388e\" DevicePath \"\"" Jul 7 06:09:31.602090 kubelet[3120]: I0707 06:09:31.601999 3120 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7bdc3f44-4958-4179-a88f-0dae8e77b540" (UID: "7bdc3f44-4958-4179-a88f-0dae8e77b540"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:09:31.602469 kubelet[3120]: I0707 06:09:31.602421 3120 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdc3f44-4958-4179-a88f-0dae8e77b540-kube-api-access-l67zp" (OuterVolumeSpecName: "kube-api-access-l67zp") pod "7bdc3f44-4958-4179-a88f-0dae8e77b540" (UID: "7bdc3f44-4958-4179-a88f-0dae8e77b540"). InnerVolumeSpecName "kube-api-access-l67zp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:09:31.637591 systemd[1]: run-netns-cni\x2d2e895572\x2d4166\x2dd4c6\x2d4bb0\x2dc0155f1b4248.mount: Deactivated successfully. Jul 7 06:09:31.637732 systemd[1]: var-lib-kubelet-pods-7bdc3f44\x2d4958\x2d4179\x2da88f\x2d0dae8e77b540-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl67zp.mount: Deactivated successfully. Jul 7 06:09:31.637789 systemd[1]: var-lib-kubelet-pods-7bdc3f44\x2d4958\x2d4179\x2da88f\x2d0dae8e77b540-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:09:31.698572 kubelet[3120]: I0707 06:09:31.698421 3120 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bdc3f44-4958-4179-a88f-0dae8e77b540-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-d5356a388e\" DevicePath \"\"" Jul 7 06:09:31.698572 kubelet[3120]: I0707 06:09:31.698453 3120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l67zp\" (UniqueName: \"kubernetes.io/projected/7bdc3f44-4958-4179-a88f-0dae8e77b540-kube-api-access-l67zp\") on node \"ci-4081.3.4-a-d5356a388e\" DevicePath \"\"" Jul 7 06:09:31.774781 systemd[1]: Removed slice kubepods-besteffort-pod7bdc3f44_4958_4179_a88f_0dae8e77b540.slice - libcontainer container kubepods-besteffort-pod7bdc3f44_4958_4179_a88f_0dae8e77b540.slice. Jul 7 06:09:32.125855 systemd[1]: Created slice kubepods-besteffort-podd48250a1_5629_4683_b52e_aacea7b8a275.slice - libcontainer container kubepods-besteffort-podd48250a1_5629_4683_b52e_aacea7b8a275.slice. 
Jul 7 06:09:32.201441 kubelet[3120]: I0707 06:09:32.201397 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hp8\" (UniqueName: \"kubernetes.io/projected/d48250a1-5629-4683-b52e-aacea7b8a275-kube-api-access-p4hp8\") pod \"whisker-d8d8bcb56-9cch4\" (UID: \"d48250a1-5629-4683-b52e-aacea7b8a275\") " pod="calico-system/whisker-d8d8bcb56-9cch4" Jul 7 06:09:32.202067 kubelet[3120]: I0707 06:09:32.201961 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48250a1-5629-4683-b52e-aacea7b8a275-whisker-ca-bundle\") pod \"whisker-d8d8bcb56-9cch4\" (UID: \"d48250a1-5629-4683-b52e-aacea7b8a275\") " pod="calico-system/whisker-d8d8bcb56-9cch4" Jul 7 06:09:32.202067 kubelet[3120]: I0707 06:09:32.202024 3120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d48250a1-5629-4683-b52e-aacea7b8a275-whisker-backend-key-pair\") pod \"whisker-d8d8bcb56-9cch4\" (UID: \"d48250a1-5629-4683-b52e-aacea7b8a275\") " pod="calico-system/whisker-d8d8bcb56-9cch4" Jul 7 06:09:32.430475 containerd[1690]: time="2025-07-07T06:09:32.430333101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d8d8bcb56-9cch4,Uid:d48250a1-5629-4683-b52e-aacea7b8a275,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:32.607091 systemd-networkd[1330]: cali0611830c7fd: Link UP Jul 7 06:09:32.607360 systemd-networkd[1330]: cali0611830c7fd: Gained carrier Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.518 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.534 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0 whisker-d8d8bcb56- calico-system d48250a1-5629-4683-b52e-aacea7b8a275 930 0 2025-07-07 06:09:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d8d8bcb56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e whisker-d8d8bcb56-9cch4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0611830c7fd [] [] }} ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.534 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.557 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" HandleID="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.557 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" HandleID="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"whisker-d8d8bcb56-9cch4", "timestamp":"2025-07-07 06:09:32.557769237 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.557 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.558 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.558 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.566 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.571 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.576 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.577 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.580 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.580 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.582 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.587 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.598 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.65/26] block=192.168.82.64/26 handle="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.598 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.65/26] handle="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.598 [INFO][4476] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jul 7 06:09:32.625512 containerd[1690]: 2025-07-07 06:09:32.598 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.65/26] IPv6=[] ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" HandleID="k8s-pod-network.3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.600 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0", GenerateName:"whisker-d8d8bcb56-", Namespace:"calico-system", SelfLink:"", UID:"d48250a1-5629-4683-b52e-aacea7b8a275", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d8d8bcb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"whisker-d8d8bcb56-9cch4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0611830c7fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.600 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.65/32] ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.600 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0611830c7fd ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.606 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.606 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" 
Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0", GenerateName:"whisker-d8d8bcb56-", Namespace:"calico-system", SelfLink:"", UID:"d48250a1-5629-4683-b52e-aacea7b8a275", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d8d8bcb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba", Pod:"whisker-d8d8bcb56-9cch4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0611830c7fd", MAC:"96:89:46:17:b2:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:32.626105 containerd[1690]: 2025-07-07 06:09:32.623 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba" Namespace="calico-system" Pod="whisker-d8d8bcb56-9cch4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--d8d8bcb56--9cch4-eth0" Jul 7 06:09:32.651473 containerd[1690]: time="2025-07-07T06:09:32.651321704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:32.652032 containerd[1690]: time="2025-07-07T06:09:32.651790705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:32.652032 containerd[1690]: time="2025-07-07T06:09:32.651814505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:32.652032 containerd[1690]: time="2025-07-07T06:09:32.651915306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:32.681454 systemd[1]: Started cri-containerd-3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba.scope - libcontainer container 3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba. 
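[Annotation] The ipam/ipam.go lines above show Calico's block-affinity allocation: the host holds an affinity for the /26 block 192.168.82.64/26, loads it, claims one free address (192.168.82.65 for the whisker pod), and writes the block back to the datastore to make the claim durable. A toy model of that claim step follows; it assumes the block's base address is already consumed (the log does not show by what), and real Calico keeps an allocation array in its datastore rather than an in-memory map:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block is a toy stand-in for a Calico IPAM affinity block such as
    // 192.168.82.64/26 in the log above.
    type block struct {
    	cidr netip.Prefix
    	used map[netip.Addr]bool
    }

    // assign claims the first free address, mirroring the
    // "Attempting to assign 1 addresses from block" followed by
    // "Writing block in order to claim IPs" sequence in the log.
    func (b *block) assign() (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if !b.used[a] {
    			b.used[a] = true
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.82.64/26"), used: map[netip.Addr]bool{}}
    	b.used[netip.MustParseAddr("192.168.82.64")] = true // assumed already taken
    	a, _ := b.assign()
    	fmt.Println(a) // 192.168.82.65 -- the whisker pod's address above
    }

The host-wide IPAM lock bracketing each request ("About to acquire" / "Released host-wide IPAM lock") serializes these claims so two pods on the node cannot race for the same address.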
Jul 7 06:09:32.732676 containerd[1690]: time="2025-07-07T06:09:32.732629907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d8d8bcb56-9cch4,Uid:d48250a1-5629-4683-b52e-aacea7b8a275,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba\"" Jul 7 06:09:32.737772 containerd[1690]: time="2025-07-07T06:09:32.737723398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:09:33.144397 kernel: bpftool[4650]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 06:09:33.375515 systemd-networkd[1330]: vxlan.calico: Link UP Jul 7 06:09:33.375524 systemd-networkd[1330]: vxlan.calico: Gained carrier Jul 7 06:09:33.785926 kubelet[3120]: I0707 06:09:33.783756 3120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdc3f44-4958-4179-a88f-0dae8e77b540" path="/var/lib/kubelet/pods/7bdc3f44-4958-4179-a88f-0dae8e77b540/volumes" Jul 7 06:09:33.812462 systemd-networkd[1330]: cali0611830c7fd: Gained IPv6LL Jul 7 06:09:34.483029 containerd[1690]: time="2025-07-07T06:09:34.482933376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:34.488718 containerd[1690]: time="2025-07-07T06:09:34.488553988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 06:09:34.495133 containerd[1690]: time="2025-07-07T06:09:34.495056441Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:34.503288 containerd[1690]: time="2025-07-07T06:09:34.503037177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:34.504035 containerd[1690]: time="2025-07-07T06:09:34.503606058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.76583662s" Jul 7 06:09:34.504035 containerd[1690]: time="2025-07-07T06:09:34.503641938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 06:09:34.518200 containerd[1690]: time="2025-07-07T06:09:34.518149687Z" level=info msg="CreateContainer within sandbox \"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:09:34.570143 containerd[1690]: time="2025-07-07T06:09:34.570051431Z" level=info msg="CreateContainer within sandbox \"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b45d4923c76c7117073871950e619accba70ac932290cc9001886aadd54b8990\"" Jul 7 06:09:34.572147 containerd[1690]: time="2025-07-07T06:09:34.572084355Z" level=info msg="StartContainer for \"b45d4923c76c7117073871950e619accba70ac932290cc9001886aadd54b8990\"" Jul 7 06:09:34.580589 systemd-networkd[1330]: vxlan.calico: Gained IPv6LL Jul 7 
06:09:34.609462 systemd[1]: Started cri-containerd-b45d4923c76c7117073871950e619accba70ac932290cc9001886aadd54b8990.scope - libcontainer container b45d4923c76c7117073871950e619accba70ac932290cc9001886aadd54b8990. Jul 7 06:09:34.646490 containerd[1690]: time="2025-07-07T06:09:34.646365504Z" level=info msg="StartContainer for \"b45d4923c76c7117073871950e619accba70ac932290cc9001886aadd54b8990\" returns successfully" Jul 7 06:09:34.652100 containerd[1690]: time="2025-07-07T06:09:34.652051715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:09:35.773054 containerd[1690]: time="2025-07-07T06:09:35.773000523Z" level=info msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.827 [INFO][4775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.828 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" iface="eth0" netns="/var/run/netns/cni-8894b426-e960-d0d3-e5ac-3a7d103e36af" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.829 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" iface="eth0" netns="/var/run/netns/cni-8894b426-e960-d0d3-e5ac-3a7d103e36af" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.830 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" iface="eth0" netns="/var/run/netns/cni-8894b426-e960-d0d3-e5ac-3a7d103e36af" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.830 [INFO][4775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.830 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.859 [INFO][4782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.859 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.859 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.868 [WARNING][4782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.868 [INFO][4782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.870 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:35.876208 containerd[1690]: 2025-07-07 06:09:35.872 [INFO][4775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:35.876208 containerd[1690]: time="2025-07-07T06:09:35.874055565Z" level=info msg="TearDown network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" successfully" Jul 7 06:09:35.876208 containerd[1690]: time="2025-07-07T06:09:35.874086645Z" level=info msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" returns successfully" Jul 7 06:09:35.876491 systemd[1]: run-netns-cni\x2d8894b426\x2de960\x2dd0d3\x2de5ac\x2d3a7d103e36af.mount: Deactivated successfully. Jul 7 06:09:35.878915 containerd[1690]: time="2025-07-07T06:09:35.878873375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4xjt,Uid:5c205245-76ab-43f8-9df0-ba526a26f50c,Namespace:kube-system,Attempt:1,}" Jul 7 06:09:36.058777 systemd-networkd[1330]: cali8883cff950b: Link UP Jul 7 06:09:36.060098 systemd-networkd[1330]: cali8883cff950b: Gained carrier Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:35.976 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0 coredns-668d6bf9bc- kube-system 5c205245-76ab-43f8-9df0-ba526a26f50c 951 0 2025-07-07 06:08:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e coredns-668d6bf9bc-s4xjt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8883cff950b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:35.976 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.005 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" HandleID="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" 
Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.005 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" HandleID="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b060), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"coredns-668d6bf9bc-s4xjt", "timestamp":"2025-07-07 06:09:36.005691469 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.005 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.005 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.005 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.015 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.020 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.025 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.027 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.030 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.030 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.033 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42 Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.043 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.053 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.66/26] block=192.168.82.64/26 handle="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.053 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.66/26] 
handle="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.053 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:36.082634 containerd[1690]: 2025-07-07 06:09:36.053 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.66/26] IPv6=[] ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" HandleID="k8s-pod-network.95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.084437 containerd[1690]: 2025-07-07 06:09:36.055 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c205245-76ab-43f8-9df0-ba526a26f50c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"coredns-668d6bf9bc-s4xjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8883cff950b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.084437 containerd[1690]: 2025-07-07 06:09:36.055 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.66/32] ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.084437 containerd[1690]: 2025-07-07 06:09:36.055 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8883cff950b ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.084437 
containerd[1690]: 2025-07-07 06:09:36.060 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.084437 containerd[1690]: 2025-07-07 06:09:36.061 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c205245-76ab-43f8-9df0-ba526a26f50c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42", Pod:"coredns-668d6bf9bc-s4xjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8883cff950b", MAC:"e2:7a:30:e0:aa:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.084437 containerd[1690]: 2025-07-07 06:09:36.076 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4xjt" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:36.110831 containerd[1690]: time="2025-07-07T06:09:36.110544159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:36.110831 containerd[1690]: time="2025-07-07T06:09:36.110600759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:36.110831 containerd[1690]: time="2025-07-07T06:09:36.110612159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.110831 containerd[1690]: time="2025-07-07T06:09:36.110692040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.135611 systemd[1]: Started cri-containerd-95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42.scope - libcontainer container 95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42. Jul 7 06:09:36.168817 containerd[1690]: time="2025-07-07T06:09:36.168775836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4xjt,Uid:5c205245-76ab-43f8-9df0-ba526a26f50c,Namespace:kube-system,Attempt:1,} returns sandbox id \"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42\"" Jul 7 06:09:36.178302 containerd[1690]: time="2025-07-07T06:09:36.178247855Z" level=info msg="CreateContainer within sandbox \"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:36.230616 containerd[1690]: time="2025-07-07T06:09:36.230564800Z" level=info msg="CreateContainer within sandbox \"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c0902786b394544b21ce2752e7cc4a274f882e028e8d609268ea2b9b4808836\"" Jul 7 06:09:36.231144 containerd[1690]: time="2025-07-07T06:09:36.231115401Z" level=info msg="StartContainer for \"4c0902786b394544b21ce2752e7cc4a274f882e028e8d609268ea2b9b4808836\"" Jul 7 06:09:36.252481 systemd[1]: Started cri-containerd-4c0902786b394544b21ce2752e7cc4a274f882e028e8d609268ea2b9b4808836.scope - libcontainer container 4c0902786b394544b21ce2752e7cc4a274f882e028e8d609268ea2b9b4808836. Jul 7 06:09:36.293158 containerd[1690]: time="2025-07-07T06:09:36.292196123Z" level=info msg="StartContainer for \"4c0902786b394544b21ce2752e7cc4a274f882e028e8d609268ea2b9b4808836\" returns successfully" Jul 7 06:09:36.780343 containerd[1690]: time="2025-07-07T06:09:36.780054062Z" level=info msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.830 [INFO][4902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.830 [INFO][4902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" iface="eth0" netns="/var/run/netns/cni-cec500f2-863f-3dcc-4a1c-d9a9c1b806c1" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.831 [INFO][4902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" iface="eth0" netns="/var/run/netns/cni-cec500f2-863f-3dcc-4a1c-d9a9c1b806c1" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.833 [INFO][4902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" iface="eth0" netns="/var/run/netns/cni-cec500f2-863f-3dcc-4a1c-d9a9c1b806c1" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.833 [INFO][4902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.833 [INFO][4902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.867 [INFO][4909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.867 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.867 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.878 [WARNING][4909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.878 [INFO][4909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.882 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:36.885150 containerd[1690]: 2025-07-07 06:09:36.883 [INFO][4902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:36.887463 containerd[1690]: time="2025-07-07T06:09:36.886538355Z" level=info msg="TearDown network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" successfully" Jul 7 06:09:36.887463 containerd[1690]: time="2025-07-07T06:09:36.886575675Z" level=info msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" returns successfully" Jul 7 06:09:36.888096 systemd[1]: run-netns-cni\x2dcec500f2\x2d863f\x2d3dcc\x2d4a1c\x2dd9a9c1b806c1.mount: Deactivated successfully. 
Jul 7 06:09:36.895338 containerd[1690]: time="2025-07-07T06:09:36.895132692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5f9856b7-2gvp4,Uid:ab60cfb0-6466-4ab8-aaf5-3dfba3e66388,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:37.115490 kubelet[3120]: I0707 06:09:37.115165 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s4xjt" podStartSLOduration=48.115147608 podStartE2EDuration="48.115147608s" podCreationTimestamp="2025-07-07 06:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:37.090714524 +0000 UTC m=+53.438545694" watchObservedRunningTime="2025-07-07 06:09:37.115147608 +0000 UTC m=+53.462978778" Jul 7 06:09:37.769258 containerd[1690]: time="2025-07-07T06:09:37.768424107Z" level=info msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" Jul 7 06:09:37.769755 containerd[1690]: time="2025-07-07T06:09:37.769727470Z" level=info msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.838 [INFO][4938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.838 [INFO][4938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" iface="eth0" netns="/var/run/netns/cni-cb8a6064-ed81-889e-91c1-0c882036d637" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.838 [INFO][4938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" iface="eth0" netns="/var/run/netns/cni-cb8a6064-ed81-889e-91c1-0c882036d637" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.844 [INFO][4938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" iface="eth0" netns="/var/run/netns/cni-cb8a6064-ed81-889e-91c1-0c882036d637" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.844 [INFO][4938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.845 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.878 [INFO][4950] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.879 [INFO][4950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.879 [INFO][4950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.897 [WARNING][4950] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.897 [INFO][4950] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.902 [INFO][4950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:37.909616 containerd[1690]: 2025-07-07 06:09:37.907 [INFO][4938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:37.920985 containerd[1690]: time="2025-07-07T06:09:37.910483435Z" level=info msg="TearDown network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" successfully" Jul 7 06:09:37.920985 containerd[1690]: time="2025-07-07T06:09:37.910513715Z" level=info msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" returns successfully" Jul 7 06:09:37.920985 containerd[1690]: time="2025-07-07T06:09:37.912540559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq7zv,Uid:7c9907d5-346d-4929-b83e-924668157d8a,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:37.914351 systemd[1]: run-netns-cni\x2dcb8a6064\x2ded81\x2d889e\x2d91c1\x2d0c882036d637.mount: Deactivated successfully. Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.849 [INFO][4937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.849 [INFO][4937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" iface="eth0" netns="/var/run/netns/cni-ffb543d2-e8fd-8645-4ba6-511dd275e6e3" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.850 [INFO][4937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" iface="eth0" netns="/var/run/netns/cni-ffb543d2-e8fd-8645-4ba6-511dd275e6e3" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.850 [INFO][4937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" iface="eth0" netns="/var/run/netns/cni-ffb543d2-e8fd-8645-4ba6-511dd275e6e3" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.850 [INFO][4937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.850 [INFO][4937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.878 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.879 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.902 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.927 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.927 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.930 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:37.937042 containerd[1690]: 2025-07-07 06:09:37.933 [INFO][4937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:37.939794 containerd[1690]: time="2025-07-07T06:09:37.939120565Z" level=info msg="TearDown network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" successfully" Jul 7 06:09:37.939794 containerd[1690]: time="2025-07-07T06:09:37.939644966Z" level=info msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" returns successfully" Jul 7 06:09:37.941325 systemd[1]: run-netns-cni\x2dffb543d2\x2de8fd\x2d8645\x2d4ba6\x2d511dd275e6e3.mount: Deactivated successfully. 
Jul 7 06:09:37.983151 containerd[1690]: time="2025-07-07T06:09:37.943706013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-czbmd,Uid:536f7498-046f-4b77-a82d-d7619df81d7a,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:09:38.037302 systemd-networkd[1330]: cali8883cff950b: Gained IPv6LL Jul 7 06:09:38.124473 systemd-networkd[1330]: cali9352f152fae: Link UP Jul 7 06:09:38.129569 systemd-networkd[1330]: cali9352f152fae: Gained carrier Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.023 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0 calico-kube-controllers-6b5f9856b7- calico-system ab60cfb0-6466-4ab8-aaf5-3dfba3e66388 960 0 2025-07-07 06:09:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b5f9856b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e calico-kube-controllers-6b5f9856b7-2gvp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9352f152fae [] [] }} ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.024 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.057 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" HandleID="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.057 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" HandleID="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"calico-kube-controllers-6b5f9856b7-2gvp4", "timestamp":"2025-07-07 06:09:38.05702409 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.057 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.057 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.057 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.069 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.079 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.086 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.088 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.091 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.092 [INFO][4977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.094 [INFO][4977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645 Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.101 [INFO][4977] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.108 [INFO][4977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.67/26] block=192.168.82.64/26 handle="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.109 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.67/26] handle="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.109 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:38.168625 containerd[1690]: 2025-07-07 06:09:38.109 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.67/26] IPv6=[] ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" HandleID="k8s-pod-network.c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.111 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0", GenerateName:"calico-kube-controllers-6b5f9856b7-", Namespace:"calico-system", SelfLink:"", UID:"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5f9856b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"calico-kube-controllers-6b5f9856b7-2gvp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9352f152fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.111 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.67/32] ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.111 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9352f152fae ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.129 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 
7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.131 [INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0", GenerateName:"calico-kube-controllers-6b5f9856b7-", Namespace:"calico-system", SelfLink:"", UID:"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5f9856b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645", Pod:"calico-kube-controllers-6b5f9856b7-2gvp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9352f152fae", MAC:"26:b0:37:63:8e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.170172 containerd[1690]: 2025-07-07 06:09:38.165 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645" Namespace="calico-system" Pod="calico-kube-controllers-6b5f9856b7-2gvp4" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:38.250906 containerd[1690]: time="2025-07-07T06:09:38.249351906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:38.250906 containerd[1690]: time="2025-07-07T06:09:38.249551186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:38.250906 containerd[1690]: time="2025-07-07T06:09:38.249563626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.250906 containerd[1690]: time="2025-07-07T06:09:38.249671746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.284454 systemd[1]: Started cri-containerd-c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645.scope - libcontainer container c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645. 
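[editor's note] The ipam.go sequence above for the calico-kube-controllers sandbox is Calico's block-affinity allocation path: the node holds an affinity for the /26 block 192.168.82.64/26 (addresses .64 through .127), the block is loaded under the host-wide IPAM lock, and the next free address (.67 here; lower addresses in the block are already taken by earlier endpoints) is claimed and written back to the datastore. A much-simplified sketch of the assign-from-block step; the claimed set below is invented for illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans a block in address order and returns the first
    // address not yet claimed, roughly what "Attempting to assign 1
    // addresses from block" does (minus handles, locking, and retries).
    func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !claimed[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.82.64/26")
        claimed := map[netip.Addr]bool{ // hypothetical earlier allocations
            netip.MustParseAddr("192.168.82.64"): true,
            netip.MustParseAddr("192.168.82.65"): true,
            netip.MustParseAddr("192.168.82.66"): true,
        }
        if a, ok := nextFree(block, claimed); ok {
            fmt.Println("assigned", a) // assigned 192.168.82.67
        }
    }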
Jul 7 06:09:38.364418 containerd[1690]: time="2025-07-07T06:09:38.364358066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5f9856b7-2gvp4,Uid:ab60cfb0-6466-4ab8-aaf5-3dfba3e66388,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645\"" Jul 7 06:09:38.541827 systemd-networkd[1330]: cali253eec1ac13: Link UP Jul 7 06:09:38.542042 systemd-networkd[1330]: cali253eec1ac13: Gained carrier Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.396 [INFO][5040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0 csi-node-driver- calico-system 7c9907d5-346d-4929-b83e-924668157d8a 973 0 2025-07-07 06:09:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e csi-node-driver-wq7zv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali253eec1ac13 [] [] }} ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.396 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" HandleID="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" HandleID="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"csi-node-driver-wq7zv", "timestamp":"2025-07-07 06:09:38.467241646 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.481 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.490 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.496 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.500 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.504 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.504 [INFO][5063] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.506 [INFO][5063] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2 Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.516 [INFO][5063] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5063] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.68/26] block=192.168.82.64/26 handle="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.68/26] handle="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:38.582851 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.68/26] IPv6=[] ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" HandleID="k8s-pod-network.bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.533 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c9907d5-346d-4929-b83e-924668157d8a", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"csi-node-driver-wq7zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali253eec1ac13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.534 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.68/32] ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.534 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali253eec1ac13 ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.542 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.551 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c9907d5-346d-4929-b83e-924668157d8a", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2", Pod:"csi-node-driver-wq7zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali253eec1ac13", MAC:"9e:4e:4a:b9:2a:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.584283 containerd[1690]: 2025-07-07 06:09:38.578 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2" Namespace="calico-system" Pod="csi-node-driver-wq7zv" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:38.621146 containerd[1690]: time="2025-07-07T06:09:38.620785153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:38.621146 containerd[1690]: time="2025-07-07T06:09:38.620849153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:38.621146 containerd[1690]: time="2025-07-07T06:09:38.620864634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.621146 containerd[1690]: time="2025-07-07T06:09:38.620950714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.654836 systemd[1]: Started cri-containerd-bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2.scope - libcontainer container bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2. 
Jul 7 06:09:38.669070 systemd-networkd[1330]: cali3b5f2235191: Link UP Jul 7 06:09:38.673666 systemd-networkd[1330]: cali3b5f2235191: Gained carrier Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.404 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0 calico-apiserver-d8cb59fcd- calico-apiserver 536f7498-046f-4b77-a82d-d7619df81d7a 974 0 2025-07-07 06:09:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8cb59fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e calico-apiserver-d8cb59fcd-czbmd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b5f2235191 [] [] }} ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.404 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.466 [INFO][5065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" HandleID="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" HandleID="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001036f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-d5356a388e", "pod":"calico-apiserver-d8cb59fcd-czbmd", "timestamp":"2025-07-07 06:09:38.466804805 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.467 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.531 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.589 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.598 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.606 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.611 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.617 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.617 [INFO][5065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.622 [INFO][5065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.637 [INFO][5065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.658 [INFO][5065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.69/26] block=192.168.82.64/26 handle="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.659 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.69/26] handle="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.659 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:38.709658 containerd[1690]: 2025-07-07 06:09:38.659 [INFO][5065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.69/26] IPv6=[] ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" HandleID="k8s-pod-network.5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.662 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"536f7498-046f-4b77-a82d-d7619df81d7a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"calico-apiserver-d8cb59fcd-czbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5f2235191", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.662 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.69/32] ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.662 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5f2235191 ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.679 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.681 [INFO][5029] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"536f7498-046f-4b77-a82d-d7619df81d7a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b", Pod:"calico-apiserver-d8cb59fcd-czbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5f2235191", MAC:"f6:9d:43:65:fa:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.710598 containerd[1690]: 2025-07-07 06:09:38.701 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-czbmd" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:38.717785 containerd[1690]: time="2025-07-07T06:09:38.717630522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq7zv,Uid:7c9907d5-346d-4929-b83e-924668157d8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2\"" Jul 7 06:09:38.759907 containerd[1690]: time="2025-07-07T06:09:38.759801116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:38.759907 containerd[1690]: time="2025-07-07T06:09:38.759860716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:38.759907 containerd[1690]: time="2025-07-07T06:09:38.759871436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.760571 containerd[1690]: time="2025-07-07T06:09:38.760352117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.774329 containerd[1690]: time="2025-07-07T06:09:38.774221541Z" level=info msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" Jul 7 06:09:38.777488 containerd[1690]: time="2025-07-07T06:09:38.777342466Z" level=info msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" Jul 7 06:09:38.778862 containerd[1690]: time="2025-07-07T06:09:38.778809189Z" level=info msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" Jul 7 06:09:38.795323 systemd[1]: Started cri-containerd-5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b.scope - libcontainer container 5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b. Jul 7 06:09:38.930085 systemd[1]: run-containerd-runc-k8s.io-c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645-runc.vDsPe3.mount: Deactivated successfully. Jul 7 06:09:38.974471 containerd[1690]: time="2025-07-07T06:09:38.974334050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-czbmd,Uid:536f7498-046f-4b77-a82d-d7619df81d7a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b\"" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.939 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.940 [INFO][5188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" iface="eth0" netns="/var/run/netns/cni-34102f19-fe24-31ed-5514-aac53a81daef" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.941 [INFO][5188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" iface="eth0" netns="/var/run/netns/cni-34102f19-fe24-31ed-5514-aac53a81daef" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.942 [INFO][5188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" iface="eth0" netns="/var/run/netns/cni-34102f19-fe24-31ed-5514-aac53a81daef" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.942 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:38.942 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.026 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.027 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.028 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.046 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.046 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.050 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:39.059327 containerd[1690]: 2025-07-07 06:09:39.054 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:39.060429 containerd[1690]: time="2025-07-07T06:09:39.060067479Z" level=info msg="TearDown network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" successfully" Jul 7 06:09:39.060429 containerd[1690]: time="2025-07-07T06:09:39.060191119Z" level=info msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" returns successfully" Jul 7 06:09:39.064729 containerd[1690]: time="2025-07-07T06:09:39.064299247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t795m,Uid:aa20f3ca-cae5-4a79-b6b7-441c614e749e,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:39.068426 systemd[1]: run-netns-cni\x2d34102f19\x2dfe24\x2d31ed\x2d5514\x2daac53a81daef.mount: Deactivated successfully. Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.971 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.973 [INFO][5187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" iface="eth0" netns="/var/run/netns/cni-8c4c19b8-271d-1fcf-6b6c-c69c6e5cd6e2" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.977 [INFO][5187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" iface="eth0" netns="/var/run/netns/cni-8c4c19b8-271d-1fcf-6b6c-c69c6e5cd6e2" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.984 [INFO][5187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" iface="eth0" netns="/var/run/netns/cni-8c4c19b8-271d-1fcf-6b6c-c69c6e5cd6e2" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.984 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:38.984 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.053 [INFO][5236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.054 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.054 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.083 [WARNING][5236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.084 [INFO][5236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.087 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:39.093982 containerd[1690]: 2025-07-07 06:09:39.091 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:39.097693 containerd[1690]: time="2025-07-07T06:09:39.096297382Z" level=info msg="TearDown network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" successfully" Jul 7 06:09:39.097819 containerd[1690]: time="2025-07-07T06:09:39.097697705Z" level=info msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" returns successfully" Jul 7 06:09:39.100699 containerd[1690]: time="2025-07-07T06:09:39.100356109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-6bttb,Uid:8e40e25b-cc4f-4bf0-9b64-76658b196296,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:09:39.102027 systemd[1]: run-netns-cni\x2d8c4c19b8\x2d271d\x2d1fcf\x2d6b6c\x2dc69c6e5cd6e2.mount: Deactivated successfully. Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:38.999 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.000 [INFO][5205] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" iface="eth0" netns="/var/run/netns/cni-acf2fed6-8328-cf5b-b0e4-525681314498" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.000 [INFO][5205] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" iface="eth0" netns="/var/run/netns/cni-acf2fed6-8328-cf5b-b0e4-525681314498" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.000 [INFO][5205] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" iface="eth0" netns="/var/run/netns/cni-acf2fed6-8328-cf5b-b0e4-525681314498" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.000 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.000 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.062 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.062 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.087 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.110 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.111 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.114 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:39.126728 containerd[1690]: 2025-07-07 06:09:39.116 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:39.128522 containerd[1690]: time="2025-07-07T06:09:39.127993878Z" level=info msg="TearDown network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" successfully" Jul 7 06:09:39.128522 containerd[1690]: time="2025-07-07T06:09:39.128340918Z" level=info msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" returns successfully" Jul 7 06:09:39.130511 containerd[1690]: time="2025-07-07T06:09:39.130173241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shc5t,Uid:fa1b246f-73f9-4220-bb08-88842ebef68c,Namespace:kube-system,Attempt:1,}" Jul 7 06:09:39.330117 systemd-networkd[1330]: calidae1cc2ffee: Link UP Jul 7 06:09:39.331991 systemd-networkd[1330]: calidae1cc2ffee: Gained carrier Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.187 [INFO][5253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0 goldmane-768f4c5c69- calico-system aa20f3ca-cae5-4a79-b6b7-441c614e749e 990 0 2025-07-07 06:09:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e goldmane-768f4c5c69-t795m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidae1cc2ffee [] [] }} ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.187 [INFO][5253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.223 [INFO][5266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" HandleID="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.223 [INFO][5266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" HandleID="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"goldmane-768f4c5c69-t795m", "timestamp":"2025-07-07 06:09:39.223396604 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.223 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.224 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.224 [INFO][5266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.241 [INFO][5266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.252 [INFO][5266] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.266 [INFO][5266] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.271 [INFO][5266] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.282 [INFO][5266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.283 [INFO][5266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.290 [INFO][5266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5 Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.297 [INFO][5266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.322 [INFO][5266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.70/26] block=192.168.82.64/26 handle="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.322 [INFO][5266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.70/26] handle="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.322 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
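Annotation: the ipam lines above trace Calico's assignment path end to end — take the host-wide lock, confirm this node's affinity for the 192.168.82.64/26 block, load the block, then claim the next free address (here 192.168.82.70). A minimal Go sketch of that block-based step, assuming a plain in-memory allocation array rather than Calico's real datastore; claimNext and the pre-marked .64–.69 range are inventions for illustration (the log only shows .66 and .68 in use elsewhere, plus the .70/.71/.72 claims below):

    // ipam_block_sketch.go — hypothetical illustration of per-block
    // assignment; NOT Calico's implementation.
    package main

    import (
        "fmt"
        "net"
    )

    // claimNext returns the first unallocated address in block and
    // marks it used. used[i] stands in for the per-block allocation
    // array the real IPAM persists under the host-wide lock.
    func claimNext(block *net.IPNet, used []bool) (net.IP, error) {
        base := block.IP.To4()
        for i, taken := range used {
            if taken {
                continue
            }
            used[i] = true
            return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
        }
        return nil, fmt.Errorf("block %s exhausted", block)
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.82.64/26")
        used := make([]bool, 64) // a /26 holds 64 addresses
        for i := 0; i < 6; i++ { // assume .64–.69 were handed out earlier
            used[i] = true
        }
        ip, _ := claimNext(block, used)
        fmt.Println(ip) // 192.168.82.70, matching the claim logged above
    }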
Jul 7 06:09:39.368714 containerd[1690]: 2025-07-07 06:09:39.322 [INFO][5266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.70/26] IPv6=[] ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" HandleID="k8s-pod-network.73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.327 [INFO][5253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aa20f3ca-cae5-4a79-b6b7-441c614e749e", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"goldmane-768f4c5c69-t795m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidae1cc2ffee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.327 [INFO][5253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.70/32] ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.327 [INFO][5253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidae1cc2ffee ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.331 [INFO][5253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.336 [INFO][5253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" 
Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aa20f3ca-cae5-4a79-b6b7-441c614e749e", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5", Pod:"goldmane-768f4c5c69-t795m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidae1cc2ffee", MAC:"6e:8c:dd:e7:11:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.369729 containerd[1690]: 2025-07-07 06:09:39.363 [INFO][5253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5" Namespace="calico-system" Pod="goldmane-768f4c5c69-t795m" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:39.434740 containerd[1690]: time="2025-07-07T06:09:39.434493612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:39.436319 containerd[1690]: time="2025-07-07T06:09:39.434245492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:39.436319 containerd[1690]: time="2025-07-07T06:09:39.434298012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:39.436319 containerd[1690]: time="2025-07-07T06:09:39.434320172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.436319 containerd[1690]: time="2025-07-07T06:09:39.434413332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.437443 containerd[1690]: time="2025-07-07T06:09:39.437395057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 06:09:39.441958 systemd-networkd[1330]: calib06b0b36129: Link UP Jul 7 06:09:39.447039 systemd-networkd[1330]: cali9352f152fae: Gained IPv6LL Jul 7 06:09:39.453421 systemd-networkd[1330]: calib06b0b36129: Gained carrier Jul 7 06:09:39.458865 containerd[1690]: time="2025-07-07T06:09:39.458732974Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:39.468462 systemd[1]: Started cri-containerd-73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5.scope - libcontainer container 73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5. Jul 7 06:09:39.482783 containerd[1690]: time="2025-07-07T06:09:39.482714976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:39.484403 containerd[1690]: time="2025-07-07T06:09:39.483571938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.831466063s" Jul 7 06:09:39.484403 containerd[1690]: time="2025-07-07T06:09:39.483617498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 06:09:39.490156 containerd[1690]: time="2025-07-07T06:09:39.490111749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.294 [INFO][5271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0 calico-apiserver-d8cb59fcd- calico-apiserver 8e40e25b-cc4f-4bf0-9b64-76658b196296 991 0 2025-07-07 06:09:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8cb59fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e calico-apiserver-d8cb59fcd-6bttb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib06b0b36129 [] [] }} ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.294 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" 
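Annotation: the whisker-backend pull that completes above moved 30814581 bytes in the reported 4.831466063s, i.e. roughly 6.4 MB/s. A quick check of that arithmetic:

    // pull_rate.go — back-of-the-envelope rate for the pull logged above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 30814581  // "bytes read" from the pull log
        const seconds = 4.831466063 // duration containerd reported
        rate := bytesRead / seconds
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
        // prints: 6.4 MB/s (6.1 MiB/s)
    }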
Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.340 [INFO][5295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" HandleID="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.340 [INFO][5295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" HandleID="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-d5356a388e", "pod":"calico-apiserver-d8cb59fcd-6bttb", "timestamp":"2025-07-07 06:09:39.339991127 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.340 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.340 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.340 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.351 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.365 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.376 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.380 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.386 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.386 [INFO][5295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.388 [INFO][5295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0 Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.400 [INFO][5295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5295] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.82.71/26] block=192.168.82.64/26 handle="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.71/26] handle="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:39.495637 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.71/26] IPv6=[] ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" HandleID="k8s-pod-network.e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.427 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40e25b-cc4f-4bf0-9b64-76658b196296", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"calico-apiserver-d8cb59fcd-6bttb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06b0b36129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.427 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.71/32] ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.427 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib06b0b36129 ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" 
WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.457 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.459 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40e25b-cc4f-4bf0-9b64-76658b196296", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0", Pod:"calico-apiserver-d8cb59fcd-6bttb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06b0b36129", MAC:"e6:ec:25:90:20:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.497169 containerd[1690]: 2025-07-07 06:09:39.483 [INFO][5271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0" Namespace="calico-apiserver" Pod="calico-apiserver-d8cb59fcd-6bttb" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:39.498478 containerd[1690]: time="2025-07-07T06:09:39.498400363Z" level=info msg="CreateContainer within sandbox \"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:09:39.566784 systemd-networkd[1330]: cali9a816c0b0b7: Link UP Jul 7 06:09:39.574781 containerd[1690]: time="2025-07-07T06:09:39.571546211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:39.574781 containerd[1690]: time="2025-07-07T06:09:39.572076652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:39.574781 containerd[1690]: time="2025-07-07T06:09:39.572093852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.574781 containerd[1690]: time="2025-07-07T06:09:39.572658653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.573424 systemd-networkd[1330]: cali9a816c0b0b7: Gained carrier Jul 7 06:09:39.597937 containerd[1690]: time="2025-07-07T06:09:39.597832777Z" level=info msg="CreateContainer within sandbox \"3f4eef77bf51b121747ee555aa344660afa95a10720911011db043b074eb21ba\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3fd36bb7b379e132611695449a835c1ddc0dc0e136b99de247c86dcda642b1c2\"" Jul 7 06:09:39.598969 containerd[1690]: time="2025-07-07T06:09:39.598935019Z" level=info msg="StartContainer for \"3fd36bb7b379e132611695449a835c1ddc0dc0e136b99de247c86dcda642b1c2\"" Jul 7 06:09:39.603243 containerd[1690]: time="2025-07-07T06:09:39.601860784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-t795m,Uid:aa20f3ca-cae5-4a79-b6b7-441c614e749e,Namespace:calico-system,Attempt:1,} returns sandbox id \"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5\"" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.333 [INFO][5283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0 coredns-668d6bf9bc- kube-system fa1b246f-73f9-4220-bb08-88842ebef68c 993 0 2025-07-07 06:08:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-d5356a388e coredns-668d6bf9bc-shc5t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a816c0b0b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.335 [INFO][5283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.408 [INFO][5307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" HandleID="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.409 [INFO][5307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" HandleID="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cce70), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-d5356a388e", "pod":"coredns-668d6bf9bc-shc5t", "timestamp":"2025-07-07 06:09:39.408677687 +0000 UTC"}, Hostname:"ci-4081.3.4-a-d5356a388e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.409 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.423 [INFO][5307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-d5356a388e' Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.468 [INFO][5307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.488 [INFO][5307] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.504 [INFO][5307] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.509 [INFO][5307] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.513 [INFO][5307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.514 [INFO][5307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.518 [INFO][5307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786 Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.530 [INFO][5307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.551 [INFO][5307] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.72/26] block=192.168.82.64/26 handle="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.551 [INFO][5307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.72/26] handle="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" host="ci-4081.3.4-a-d5356a388e" Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.551 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
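Annotation: by this point three addresses (.70, .71, .72) have been claimed from the same /26 block. A small, assumed-shape scanner for pulling those claim events out of a journal dump like this one — the regular expression is tied to this plugin version's exact wording and may need adjusting elsewhere:

    // claimed_ips.go — extract "Successfully claimed IPs" events from
    // journal text on stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var claimRe = regexp.MustCompile(
        `Successfully claimed\s+IPs: \[([0-9./]+)\] block=([0-9./]+)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        // journal lines like the ones above far exceed the default
        // 64 KiB token limit, so raise the buffer cap.
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := claimRe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("claimed %s from block %s\n", m[1], m[2])
            }
        }
    }

Fed something like `journalctl -o cat | go run claimed_ips.go`, it would print one line per claim (192.168.82.70/26, .71/26, .72/26 for this boot).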
Jul 7 06:09:39.628436 containerd[1690]: 2025-07-07 06:09:39.551 [INFO][5307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.72/26] IPv6=[] ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" HandleID="k8s-pod-network.94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.557 [INFO][5283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa1b246f-73f9-4220-bb08-88842ebef68c", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"", Pod:"coredns-668d6bf9bc-shc5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a816c0b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.558 [INFO][5283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.72/32] ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.558 [INFO][5283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a816c0b0b7 ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.584 [INFO][5283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" 
WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.587 [INFO][5283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa1b246f-73f9-4220-bb08-88842ebef68c", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786", Pod:"coredns-668d6bf9bc-shc5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a816c0b0b7", MAC:"82:d4:36:5c:1c:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:39.629168 containerd[1690]: 2025-07-07 06:09:39.617 [INFO][5283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786" Namespace="kube-system" Pod="coredns-668d6bf9bc-shc5t" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:39.629555 systemd[1]: Started cri-containerd-e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0.scope - libcontainer container e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0. Jul 7 06:09:39.654526 systemd[1]: Started cri-containerd-3fd36bb7b379e132611695449a835c1ddc0dc0e136b99de247c86dcda642b1c2.scope - libcontainer container 3fd36bb7b379e132611695449a835c1ddc0dc0e136b99de247c86dcda642b1c2. Jul 7 06:09:39.689718 containerd[1690]: time="2025-07-07T06:09:39.689322416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:39.689718 containerd[1690]: time="2025-07-07T06:09:39.689471097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:39.689718 containerd[1690]: time="2025-07-07T06:09:39.689486257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.689718 containerd[1690]: time="2025-07-07T06:09:39.689592777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:39.728544 systemd[1]: Started cri-containerd-94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786.scope - libcontainer container 94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786. Jul 7 06:09:39.743931 containerd[1690]: time="2025-07-07T06:09:39.743884231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8cb59fcd-6bttb,Uid:8e40e25b-cc4f-4bf0-9b64-76658b196296,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0\"" Jul 7 06:09:39.803707 containerd[1690]: time="2025-07-07T06:09:39.803559456Z" level=info msg="StartContainer for \"3fd36bb7b379e132611695449a835c1ddc0dc0e136b99de247c86dcda642b1c2\" returns successfully" Jul 7 06:09:39.815641 containerd[1690]: time="2025-07-07T06:09:39.815602916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-shc5t,Uid:fa1b246f-73f9-4220-bb08-88842ebef68c,Namespace:kube-system,Attempt:1,} returns sandbox id \"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786\"" Jul 7 06:09:39.819929 containerd[1690]: time="2025-07-07T06:09:39.819789564Z" level=info msg="CreateContainer within sandbox \"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:39.880685 containerd[1690]: time="2025-07-07T06:09:39.880016229Z" level=info msg="CreateContainer within sandbox \"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf61a52a8ff338e81e0827919427dc0da35c2d171250efe958a93c07d06e4f4b\"" Jul 7 06:09:39.881694 containerd[1690]: time="2025-07-07T06:09:39.881653792Z" level=info msg="StartContainer for \"bf61a52a8ff338e81e0827919427dc0da35c2d171250efe958a93c07d06e4f4b\"" Jul 7 06:09:39.892475 systemd-networkd[1330]: cali3b5f2235191: Gained IPv6LL Jul 7 06:09:39.908459 systemd[1]: Started cri-containerd-bf61a52a8ff338e81e0827919427dc0da35c2d171250efe958a93c07d06e4f4b.scope - libcontainer container bf61a52a8ff338e81e0827919427dc0da35c2d171250efe958a93c07d06e4f4b. Jul 7 06:09:39.921152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251366155.mount: Deactivated successfully. Jul 7 06:09:39.921399 systemd[1]: run-netns-cni\x2dacf2fed6\x2d8328\x2dcf5b\x2db0e4\x2d525681314498.mount: Deactivated successfully. 
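Annotation: the run-netns-cni\x2d... mount units being deactivated above are systemd-escaped paths — \x2d is a literal "-", while an unescaped "-" separates path components, so run-netns-cni\x2dacf2fed6… decodes to /run/netns/cni-acf2fed6…. A minimal decoder covering just the \xXX escapes seen here, not the full systemd-escape grammar:

    // unit_unescape.go — decode a systemd mount unit name back to a path.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func unescapeUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
                b.WriteByte(byte(v)) // \x2d -> '-'
                i += 3
            case name[i] == '-':
                b.WriteByte('/') // bare '-' separates path components
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`run-netns-cni\x2dacf2fed6\x2d8328\x2dcf5b\x2db0e4\x2d525681314498.mount`))
        // /run/netns/cni-acf2fed6-8328-cf5b-b0e4-525681314498
    }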
Jul 7 06:09:39.958968 containerd[1690]: time="2025-07-07T06:09:39.958922686Z" level=info msg="StartContainer for \"bf61a52a8ff338e81e0827919427dc0da35c2d171250efe958a93c07d06e4f4b\" returns successfully" Jul 7 06:09:40.130962 kubelet[3120]: I0707 06:09:40.130740 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-shc5t" podStartSLOduration=51.130717346 podStartE2EDuration="51.130717346s" podCreationTimestamp="2025-07-07 06:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:40.111068512 +0000 UTC m=+56.458899682" watchObservedRunningTime="2025-07-07 06:09:40.130717346 +0000 UTC m=+56.478548516" Jul 7 06:09:40.149013 systemd-networkd[1330]: cali253eec1ac13: Gained IPv6LL Jul 7 06:09:40.596473 systemd-networkd[1330]: calidae1cc2ffee: Gained IPv6LL Jul 7 06:09:41.300413 systemd-networkd[1330]: calib06b0b36129: Gained IPv6LL Jul 7 06:09:41.556388 systemd-networkd[1330]: cali9a816c0b0b7: Gained IPv6LL Jul 7 06:09:42.548587 containerd[1690]: time="2025-07-07T06:09:42.548528921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:42.552263 containerd[1690]: time="2025-07-07T06:09:42.552177368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 06:09:42.561375 containerd[1690]: time="2025-07-07T06:09:42.561337824Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:42.569533 containerd[1690]: time="2025-07-07T06:09:42.569352918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:42.571048 containerd[1690]: time="2025-07-07T06:09:42.570896000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.08039609s" Jul 7 06:09:42.571048 containerd[1690]: time="2025-07-07T06:09:42.570943880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 06:09:42.572784 containerd[1690]: time="2025-07-07T06:09:42.572531483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:09:42.605473 containerd[1690]: time="2025-07-07T06:09:42.605403620Z" level=info msg="CreateContainer within sandbox \"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:09:43.493528 containerd[1690]: time="2025-07-07T06:09:43.493321115Z" level=info msg="CreateContainer within sandbox \"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf\"" Jul 7 
06:09:43.495595 containerd[1690]: time="2025-07-07T06:09:43.495457999Z" level=info msg="StartContainer for \"df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf\"" Jul 7 06:09:43.530442 systemd[1]: Started cri-containerd-df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf.scope - libcontainer container df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf. Jul 7 06:09:43.569607 containerd[1690]: time="2025-07-07T06:09:43.569419946Z" level=info msg="StartContainer for \"df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf\" returns successfully" Jul 7 06:09:43.746775 containerd[1690]: time="2025-07-07T06:09:43.746445938Z" level=info msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.792 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c205245-76ab-43f8-9df0-ba526a26f50c", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42", Pod:"coredns-668d6bf9bc-s4xjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8883cff950b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.792 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.793 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" iface="eth0" netns="" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.793 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.793 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.816 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.816 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.816 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.826 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.827 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.831 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:43.840530 containerd[1690]: 2025-07-07 06:09:43.837 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.840530 containerd[1690]: time="2025-07-07T06:09:43.840261445Z" level=info msg="TearDown network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" successfully" Jul 7 06:09:43.840530 containerd[1690]: time="2025-07-07T06:09:43.840293165Z" level=info msg="StopPodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" returns successfully" Jul 7 06:09:43.842752 containerd[1690]: time="2025-07-07T06:09:43.842077209Z" level=info msg="RemovePodSandbox for \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" Jul 7 06:09:43.848344 containerd[1690]: time="2025-07-07T06:09:43.847190499Z" level=info msg="Forcibly stopping sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\"" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.914 [WARNING][5632] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c205245-76ab-43f8-9df0-ba526a26f50c", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"95c93dcfc78d1e79e4171973ca3c03bcd9e074d2b0473106ead6b88b65bf1e42", Pod:"coredns-668d6bf9bc-s4xjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8883cff950b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.914 [INFO][5632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.914 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" iface="eth0" netns="" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.914 [INFO][5632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.914 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.936 [INFO][5641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.936 [INFO][5641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.936 [INFO][5641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.946 [WARNING][5641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.947 [INFO][5641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" HandleID="k8s-pod-network.26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--s4xjt-eth0" Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.948 [INFO][5641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:43.951956 containerd[1690]: 2025-07-07 06:09:43.950 [INFO][5632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a" Jul 7 06:09:43.952449 containerd[1690]: time="2025-07-07T06:09:43.952011187Z" level=info msg="TearDown network for sandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" successfully" Jul 7 06:09:43.980321 containerd[1690]: time="2025-07-07T06:09:43.980268964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:43.980479 containerd[1690]: time="2025-07-07T06:09:43.980350004Z" level=info msg="RemovePodSandbox \"26a4b7ecae2b04de56220419e1873327e3007fa142efe70d7c7f694d76d9e25a\" returns successfully" Jul 7 06:09:43.981149 containerd[1690]: time="2025-07-07T06:09:43.981071245Z" level=info msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.021 [WARNING][5655] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c9907d5-346d-4929-b83e-924668157d8a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2", Pod:"csi-node-driver-wq7zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali253eec1ac13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.022 [INFO][5655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.022 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" iface="eth0" netns="" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.022 [INFO][5655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.022 [INFO][5655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.049 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.049 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.049 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.061 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.061 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.065 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.074778 containerd[1690]: 2025-07-07 06:09:44.070 [INFO][5655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.076984 containerd[1690]: time="2025-07-07T06:09:44.074839472Z" level=info msg="TearDown network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" successfully" Jul 7 06:09:44.076984 containerd[1690]: time="2025-07-07T06:09:44.074873232Z" level=info msg="StopPodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" returns successfully" Jul 7 06:09:44.076984 containerd[1690]: time="2025-07-07T06:09:44.075383113Z" level=info msg="RemovePodSandbox for \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" Jul 7 06:09:44.076984 containerd[1690]: time="2025-07-07T06:09:44.075413713Z" level=info msg="Forcibly stopping sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\"" Jul 7 06:09:44.145040 kubelet[3120]: I0707 06:09:44.143015 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d8d8bcb56-9cch4" podStartSLOduration=5.389400334 podStartE2EDuration="12.142996367s" podCreationTimestamp="2025-07-07 06:09:32 +0000 UTC" firstStartedPulling="2025-07-07 06:09:32.736176035 +0000 UTC m=+49.084007205" lastFinishedPulling="2025-07-07 06:09:39.489772068 +0000 UTC m=+55.837603238" observedRunningTime="2025-07-07 06:09:40.151350742 +0000 UTC m=+56.499181912" watchObservedRunningTime="2025-07-07 06:09:44.142996367 +0000 UTC m=+60.490827537" Jul 7 06:09:44.188618 kubelet[3120]: I0707 06:09:44.188473 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b5f9856b7-2gvp4" podStartSLOduration=28.982933165 podStartE2EDuration="33.188448738s" podCreationTimestamp="2025-07-07 06:09:11 +0000 UTC" firstStartedPulling="2025-07-07 06:09:38.366159829 +0000 UTC m=+54.713990999" lastFinishedPulling="2025-07-07 06:09:42.571675402 +0000 UTC m=+58.919506572" observedRunningTime="2025-07-07 06:09:44.143454688 +0000 UTC m=+60.491285858" watchObservedRunningTime="2025-07-07 06:09:44.188448738 +0000 UTC m=+60.536279908" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.127 [WARNING][5680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c9907d5-346d-4929-b83e-924668157d8a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2", Pod:"csi-node-driver-wq7zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali253eec1ac13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.130 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.130 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" iface="eth0" netns="" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.130 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.130 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.167 [INFO][5693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.170 [INFO][5693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.170 [INFO][5693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.181 [WARNING][5693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.181 [INFO][5693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" HandleID="k8s-pod-network.9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Workload="ci--4081.3.4--a--d5356a388e-k8s-csi--node--driver--wq7zv-eth0" Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.186 [INFO][5693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.194526 containerd[1690]: 2025-07-07 06:09:44.192 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f" Jul 7 06:09:44.195007 containerd[1690]: time="2025-07-07T06:09:44.194566550Z" level=info msg="TearDown network for sandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" successfully" Jul 7 06:09:44.204574 containerd[1690]: time="2025-07-07T06:09:44.204515570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:44.204734 containerd[1690]: time="2025-07-07T06:09:44.204592450Z" level=info msg="RemovePodSandbox \"9c3d9c201b29d6f8f9c27d944b929508d16a68d59862c567a441e4f5fc1c0c2f\" returns successfully" Jul 7 06:09:44.205854 containerd[1690]: time="2025-07-07T06:09:44.205455692Z" level=info msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.249 [WARNING][5719] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"536f7498-046f-4b77-a82d-d7619df81d7a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b", Pod:"calico-apiserver-d8cb59fcd-czbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5f2235191", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.250 [INFO][5719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.250 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" iface="eth0" netns="" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.250 [INFO][5719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.250 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.278 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.278 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.278 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.294 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.294 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.295 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.299013 containerd[1690]: 2025-07-07 06:09:44.297 [INFO][5719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.299889 containerd[1690]: time="2025-07-07T06:09:44.299059278Z" level=info msg="TearDown network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" successfully" Jul 7 06:09:44.299889 containerd[1690]: time="2025-07-07T06:09:44.299090558Z" level=info msg="StopPodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" returns successfully" Jul 7 06:09:44.300407 containerd[1690]: time="2025-07-07T06:09:44.300013280Z" level=info msg="RemovePodSandbox for \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" Jul 7 06:09:44.300407 containerd[1690]: time="2025-07-07T06:09:44.300070680Z" level=info msg="Forcibly stopping sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\"" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.337 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"536f7498-046f-4b77-a82d-d7619df81d7a", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b", Pod:"calico-apiserver-d8cb59fcd-czbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b5f2235191", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.337 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.337 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" iface="eth0" netns="" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.337 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.337 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.357 [INFO][5748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.357 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.357 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.366 [WARNING][5748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.366 [INFO][5748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" HandleID="k8s-pod-network.fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--czbmd-eth0" Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.368 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.371451 containerd[1690]: 2025-07-07 06:09:44.369 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23" Jul 7 06:09:44.372281 containerd[1690]: time="2025-07-07T06:09:44.371415302Z" level=info msg="TearDown network for sandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" successfully" Jul 7 06:09:44.383319 containerd[1690]: time="2025-07-07T06:09:44.383267925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:44.383468 containerd[1690]: time="2025-07-07T06:09:44.383349285Z" level=info msg="RemovePodSandbox \"fc1a2a28c765d237e8de99f69faf36e658cd67bb05ae25c56d7fc6a330f02a23\" returns successfully" Jul 7 06:09:44.383882 containerd[1690]: time="2025-07-07T06:09:44.383822086Z" level=info msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.423 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40e25b-cc4f-4bf0-9b64-76658b196296", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0", Pod:"calico-apiserver-d8cb59fcd-6bttb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06b0b36129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.424 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.424 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" iface="eth0" netns="" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.424 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.424 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.453 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.453 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.453 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.463 [WARNING][5769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.463 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.465 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.469944 containerd[1690]: 2025-07-07 06:09:44.467 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.469944 containerd[1690]: time="2025-07-07T06:09:44.469782737Z" level=info msg="TearDown network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" successfully" Jul 7 06:09:44.469944 containerd[1690]: time="2025-07-07T06:09:44.469809017Z" level=info msg="StopPodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" returns successfully" Jul 7 06:09:44.471707 containerd[1690]: time="2025-07-07T06:09:44.471658461Z" level=info msg="RemovePodSandbox for \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" Jul 7 06:09:44.471707 containerd[1690]: time="2025-07-07T06:09:44.471707661Z" level=info msg="Forcibly stopping sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\"" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.518 [WARNING][5783] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0", GenerateName:"calico-apiserver-d8cb59fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e40e25b-cc4f-4bf0-9b64-76658b196296", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8cb59fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0", Pod:"calico-apiserver-d8cb59fcd-6bttb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06b0b36129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.519 [INFO][5783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.519 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" iface="eth0" netns="" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.519 [INFO][5783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.519 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.546 [INFO][5790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.547 [INFO][5790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.547 [INFO][5790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.559 [WARNING][5790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.560 [INFO][5790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" HandleID="k8s-pod-network.e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--apiserver--d8cb59fcd--6bttb-eth0" Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.561 [INFO][5790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.565825 containerd[1690]: 2025-07-07 06:09:44.563 [INFO][5783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d" Jul 7 06:09:44.566861 containerd[1690]: time="2025-07-07T06:09:44.566376890Z" level=info msg="TearDown network for sandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" successfully" Jul 7 06:09:44.578956 containerd[1690]: time="2025-07-07T06:09:44.578885674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:44.579900 containerd[1690]: time="2025-07-07T06:09:44.579704356Z" level=info msg="RemovePodSandbox \"e14e03a7deafb0113533909f2c9c908c4ecd53d40a0281e971c2e07b737bba8d\" returns successfully" Jul 7 06:09:44.580952 containerd[1690]: time="2025-07-07T06:09:44.580787878Z" level=info msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.641 [WARNING][5809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0", GenerateName:"calico-kube-controllers-6b5f9856b7-", Namespace:"calico-system", SelfLink:"", UID:"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5f9856b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645", Pod:"calico-kube-controllers-6b5f9856b7-2gvp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9352f152fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.642 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.642 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" iface="eth0" netns="" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.642 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.642 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.699 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.700 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.700 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.709 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.709 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.712 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.717288 containerd[1690]: 2025-07-07 06:09:44.713 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.717288 containerd[1690]: time="2025-07-07T06:09:44.715499466Z" level=info msg="TearDown network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" successfully" Jul 7 06:09:44.717288 containerd[1690]: time="2025-07-07T06:09:44.715527306Z" level=info msg="StopPodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" returns successfully" Jul 7 06:09:44.717288 containerd[1690]: time="2025-07-07T06:09:44.716080827Z" level=info msg="RemovePodSandbox for \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" Jul 7 06:09:44.717288 containerd[1690]: time="2025-07-07T06:09:44.716118187Z" level=info msg="Forcibly stopping sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\"" Jul 7 06:09:44.744253 containerd[1690]: time="2025-07-07T06:09:44.742679480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:44.747727 containerd[1690]: time="2025-07-07T06:09:44.747681810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 06:09:44.751613 containerd[1690]: time="2025-07-07T06:09:44.751565058Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:44.759897 containerd[1690]: time="2025-07-07T06:09:44.759843594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:44.760648 containerd[1690]: time="2025-07-07T06:09:44.760532436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.187966993s" Jul 7 06:09:44.760769 containerd[1690]: time="2025-07-07T06:09:44.760751756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 06:09:44.766372 containerd[1690]: time="2025-07-07T06:09:44.765832606Z" level=info msg="CreateContainer within sandbox 
\"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:09:44.767474 containerd[1690]: time="2025-07-07T06:09:44.767436849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.771 [WARNING][5830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0", GenerateName:"calico-kube-controllers-6b5f9856b7-", Namespace:"calico-system", SelfLink:"", UID:"ab60cfb0-6466-4ab8-aaf5-3dfba3e66388", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5f9856b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"c8285c2548bc24401b9d4af2fec514496c5a2b9e4085e06ae2426dfb4f08f645", Pod:"calico-kube-controllers-6b5f9856b7-2gvp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9352f152fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.772 [INFO][5830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.772 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" iface="eth0" netns="" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.772 [INFO][5830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.772 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.798 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.798 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.798 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.813 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.813 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" HandleID="k8s-pod-network.08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Workload="ci--4081.3.4--a--d5356a388e-k8s-calico--kube--controllers--6b5f9856b7--2gvp4-eth0" Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.815 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.819290 containerd[1690]: 2025-07-07 06:09:44.817 [INFO][5830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0" Jul 7 06:09:44.819780 containerd[1690]: time="2025-07-07T06:09:44.819339993Z" level=info msg="TearDown network for sandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" successfully" Jul 7 06:09:44.823931 containerd[1690]: time="2025-07-07T06:09:44.823843802Z" level=info msg="CreateContainer within sandbox \"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e4669242cd0b9125ae282c6499a3e6c2ae2e5de97960713cdae90ebd220dcf29\"" Jul 7 06:09:44.825925 containerd[1690]: time="2025-07-07T06:09:44.824420923Z" level=info msg="StartContainer for \"e4669242cd0b9125ae282c6499a3e6c2ae2e5de97960713cdae90ebd220dcf29\"" Jul 7 06:09:44.844721 containerd[1690]: time="2025-07-07T06:09:44.844665123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 06:09:44.845043 containerd[1690]: time="2025-07-07T06:09:44.845010044Z" level=info msg="RemovePodSandbox \"08e338f8d53bc7f39ed5a9425ee13de914a421526297ed6d91f2ab162929dae0\" returns successfully" Jul 7 06:09:44.847721 containerd[1690]: time="2025-07-07T06:09:44.847674689Z" level=info msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" Jul 7 06:09:44.871002 systemd[1]: Started cri-containerd-e4669242cd0b9125ae282c6499a3e6c2ae2e5de97960713cdae90ebd220dcf29.scope - libcontainer container e4669242cd0b9125ae282c6499a3e6c2ae2e5de97960713cdae90ebd220dcf29. Jul 7 06:09:44.923722 containerd[1690]: time="2025-07-07T06:09:44.923484920Z" level=info msg="StartContainer for \"e4669242cd0b9125ae282c6499a3e6c2ae2e5de97960713cdae90ebd220dcf29\" returns successfully" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.926 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa1b246f-73f9-4220-bb08-88842ebef68c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786", Pod:"coredns-668d6bf9bc-shc5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a816c0b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.926 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.926 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" iface="eth0" netns="" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.926 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.926 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.950 [INFO][5895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.950 [INFO][5895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.950 [INFO][5895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.959 [WARNING][5895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.959 [INFO][5895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.961 [INFO][5895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:44.965040 containerd[1690]: 2025-07-07 06:09:44.963 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:44.965540 containerd[1690]: time="2025-07-07T06:09:44.965086283Z" level=info msg="TearDown network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" successfully" Jul 7 06:09:44.965540 containerd[1690]: time="2025-07-07T06:09:44.965115563Z" level=info msg="StopPodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" returns successfully" Jul 7 06:09:44.966056 containerd[1690]: time="2025-07-07T06:09:44.965751084Z" level=info msg="RemovePodSandbox for \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" Jul 7 06:09:44.966056 containerd[1690]: time="2025-07-07T06:09:44.965788004Z" level=info msg="Forcibly stopping sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\"" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.010 [WARNING][5910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa1b246f-73f9-4220-bb08-88842ebef68c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"94aba52306e1b2a67af27643958d13ff751e42a8b42c543ea3887ee209e1f786", Pod:"coredns-668d6bf9bc-shc5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a816c0b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.010 [INFO][5910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.010 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" iface="eth0" netns="" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.010 [INFO][5910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.010 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.030 [INFO][5917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.030 [INFO][5917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.030 [INFO][5917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.039 [WARNING][5917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.039 [INFO][5917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" HandleID="k8s-pod-network.644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Workload="ci--4081.3.4--a--d5356a388e-k8s-coredns--668d6bf9bc--shc5t-eth0" Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.043 [INFO][5917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:45.046845 containerd[1690]: 2025-07-07 06:09:45.044 [INFO][5910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd" Jul 7 06:09:45.046845 containerd[1690]: time="2025-07-07T06:09:45.046897285Z" level=info msg="TearDown network for sandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" successfully" Jul 7 06:09:45.057585 containerd[1690]: time="2025-07-07T06:09:45.057531227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:45.057741 containerd[1690]: time="2025-07-07T06:09:45.057633987Z" level=info msg="RemovePodSandbox \"644c52ac3bd745db70fa363e31d9241191fda8692a86944a7b1ae9da8f28cefd\" returns successfully" Jul 7 06:09:45.058435 containerd[1690]: time="2025-07-07T06:09:45.058335948Z" level=info msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.098 [WARNING][5931] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aa20f3ca-cae5-4a79-b6b7-441c614e749e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5", Pod:"goldmane-768f4c5c69-t795m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidae1cc2ffee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.099 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.099 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" iface="eth0" netns="" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.099 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.099 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.122 [INFO][5939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.123 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.123 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.136 [WARNING][5939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.136 [INFO][5939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.138 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:45.141632 containerd[1690]: 2025-07-07 06:09:45.139 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.142546 containerd[1690]: time="2025-07-07T06:09:45.141679954Z" level=info msg="TearDown network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" successfully" Jul 7 06:09:45.142546 containerd[1690]: time="2025-07-07T06:09:45.141707834Z" level=info msg="StopPodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" returns successfully" Jul 7 06:09:45.142546 containerd[1690]: time="2025-07-07T06:09:45.142181115Z" level=info msg="RemovePodSandbox for \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" Jul 7 06:09:45.142546 containerd[1690]: time="2025-07-07T06:09:45.142325196Z" level=info msg="Forcibly stopping sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\"" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.183 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"aa20f3ca-cae5-4a79-b6b7-441c614e749e", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-d5356a388e", ContainerID:"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5", Pod:"goldmane-768f4c5c69-t795m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidae1cc2ffee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.183 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.183 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" iface="eth0" netns="" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.183 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.184 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.207 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.208 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.208 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.218 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.218 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" HandleID="k8s-pod-network.66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Workload="ci--4081.3.4--a--d5356a388e-k8s-goldmane--768f4c5c69--t795m-eth0" Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.219 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:45.223122 containerd[1690]: 2025-07-07 06:09:45.221 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c" Jul 7 06:09:45.223573 containerd[1690]: time="2025-07-07T06:09:45.223165557Z" level=info msg="TearDown network for sandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" successfully" Jul 7 06:09:45.234916 containerd[1690]: time="2025-07-07T06:09:45.234840421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:45.235240 containerd[1690]: time="2025-07-07T06:09:45.234926381Z" level=info msg="RemovePodSandbox \"66db4b57f12fbb803f08f6e507d6d9ec0c3b104c1f577eba1de9f37925c78f1c\" returns successfully" Jul 7 06:09:45.236074 containerd[1690]: time="2025-07-07T06:09:45.235579742Z" level=info msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.273 [WARNING][5974] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.274 [INFO][5974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.274 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" iface="eth0" netns="" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.274 [INFO][5974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.274 [INFO][5974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.299 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.299 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.299 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.308 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.308 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.310 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:45.313573 containerd[1690]: 2025-07-07 06:09:45.311 [INFO][5974] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.314641 containerd[1690]: time="2025-07-07T06:09:45.313621419Z" level=info msg="TearDown network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" successfully" Jul 7 06:09:45.314641 containerd[1690]: time="2025-07-07T06:09:45.313651619Z" level=info msg="StopPodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" returns successfully" Jul 7 06:09:45.314641 containerd[1690]: time="2025-07-07T06:09:45.314121620Z" level=info msg="RemovePodSandbox for \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" Jul 7 06:09:45.314641 containerd[1690]: time="2025-07-07T06:09:45.314157860Z" level=info msg="Forcibly stopping sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\"" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.349 [WARNING][5995] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" WorkloadEndpoint="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.349 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.349 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" iface="eth0" netns="" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.349 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.349 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.371 [INFO][6002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.371 [INFO][6002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.371 [INFO][6002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.381 [WARNING][6002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.381 [INFO][6002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" HandleID="k8s-pod-network.ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Workload="ci--4081.3.4--a--d5356a388e-k8s-whisker--7b96468696--n7cks-eth0" Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.383 [INFO][6002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:45.386796 containerd[1690]: 2025-07-07 06:09:45.385 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e" Jul 7 06:09:45.387354 containerd[1690]: time="2025-07-07T06:09:45.386841565Z" level=info msg="TearDown network for sandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" successfully" Jul 7 06:09:45.437343 containerd[1690]: time="2025-07-07T06:09:45.437286586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:45.437497 containerd[1690]: time="2025-07-07T06:09:45.437381226Z" level=info msg="RemovePodSandbox \"ecce50db885e8079eb4c6b5d9536c38b72ce84103a82bca1bccaa21f5cd38c0e\" returns successfully" Jul 7 06:09:47.653907 containerd[1690]: time="2025-07-07T06:09:47.653840582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:47.664339 containerd[1690]: time="2025-07-07T06:09:47.664250163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 06:09:47.673143 containerd[1690]: time="2025-07-07T06:09:47.673062861Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:47.691050 containerd[1690]: time="2025-07-07T06:09:47.690968577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:47.692051 containerd[1690]: time="2025-07-07T06:09:47.691887819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.924267049s" Jul 7 06:09:47.692051 containerd[1690]: time="2025-07-07T06:09:47.691949979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:09:47.694501 containerd[1690]: time="2025-07-07T06:09:47.694460464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:09:47.697645 
containerd[1690]: time="2025-07-07T06:09:47.697045029Z" level=info msg="CreateContainer within sandbox \"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:09:47.782599 containerd[1690]: time="2025-07-07T06:09:47.782531960Z" level=info msg="CreateContainer within sandbox \"5cac852d0c5dd0a3cf3abaabebe784e24647f8e50837b06fcf541e1ab6bd877b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f053195dc22c870cc37266d5f5b955cae9918349ff65d41dbce1c06b89e241e5\"" Jul 7 06:09:47.783655 containerd[1690]: time="2025-07-07T06:09:47.783609162Z" level=info msg="StartContainer for \"f053195dc22c870cc37266d5f5b955cae9918349ff65d41dbce1c06b89e241e5\"" Jul 7 06:09:47.818516 systemd[1]: Started cri-containerd-f053195dc22c870cc37266d5f5b955cae9918349ff65d41dbce1c06b89e241e5.scope - libcontainer container f053195dc22c870cc37266d5f5b955cae9918349ff65d41dbce1c06b89e241e5. Jul 7 06:09:47.857730 containerd[1690]: time="2025-07-07T06:09:47.857672870Z" level=info msg="StartContainer for \"f053195dc22c870cc37266d5f5b955cae9918349ff65d41dbce1c06b89e241e5\" returns successfully" Jul 7 06:09:48.165262 kubelet[3120]: I0707 06:09:48.164631 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8cb59fcd-czbmd" podStartSLOduration=33.451643044 podStartE2EDuration="42.164611165s" podCreationTimestamp="2025-07-07 06:09:06 +0000 UTC" firstStartedPulling="2025-07-07 06:09:38.98002786 +0000 UTC m=+55.327859030" lastFinishedPulling="2025-07-07 06:09:47.692995981 +0000 UTC m=+64.040827151" observedRunningTime="2025-07-07 06:09:48.163991803 +0000 UTC m=+64.511822973" watchObservedRunningTime="2025-07-07 06:09:48.164611165 +0000 UTC m=+64.512442335" Jul 7 06:09:49.150349 kubelet[3120]: I0707 06:09:49.150027 3120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:50.923359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216031403.mount: Deactivated successfully. 
Jul 7 06:09:52.841082 containerd[1690]: time="2025-07-07T06:09:52.841034204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:52.844192 containerd[1690]: time="2025-07-07T06:09:52.844146851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 06:09:52.852802 containerd[1690]: time="2025-07-07T06:09:52.852731388Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:52.863178 containerd[1690]: time="2025-07-07T06:09:52.863116729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:52.863913 containerd[1690]: time="2025-07-07T06:09:52.863726850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 5.169224826s" Jul 7 06:09:52.863913 containerd[1690]: time="2025-07-07T06:09:52.863764610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 06:09:52.865640 containerd[1690]: time="2025-07-07T06:09:52.865591974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:09:52.867700 containerd[1690]: time="2025-07-07T06:09:52.867658658Z" level=info msg="CreateContainer within sandbox \"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:09:52.940688 containerd[1690]: time="2025-07-07T06:09:52.940637724Z" level=info msg="CreateContainer within sandbox \"73bf8099d28f904ef2f5e7d20f52b9d1956cf2ea4f3b47c0a0c1458034a400f5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2\"" Jul 7 06:09:52.941509 containerd[1690]: time="2025-07-07T06:09:52.941475246Z" level=info msg="StartContainer for \"0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2\"" Jul 7 06:09:53.001442 systemd[1]: Started cri-containerd-0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2.scope - libcontainer container 0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2. 
Jul 7 06:09:53.040163 containerd[1690]: time="2025-07-07T06:09:53.039709122Z" level=info msg="StartContainer for \"0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2\" returns successfully" Jul 7 06:09:53.209663 kubelet[3120]: I0707 06:09:53.209246 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-t795m" podStartSLOduration=27.951052451 podStartE2EDuration="41.209211511s" podCreationTimestamp="2025-07-07 06:09:12 +0000 UTC" firstStartedPulling="2025-07-07 06:09:39.606776072 +0000 UTC m=+55.954607242" lastFinishedPulling="2025-07-07 06:09:52.864935132 +0000 UTC m=+69.212766302" observedRunningTime="2025-07-07 06:09:53.184574939 +0000 UTC m=+69.532406109" watchObservedRunningTime="2025-07-07 06:09:53.209211511 +0000 UTC m=+69.557042681" Jul 7 06:09:53.224072 containerd[1690]: time="2025-07-07T06:09:53.223359181Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:53.228123 containerd[1690]: time="2025-07-07T06:09:53.228058271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:09:53.239244 containerd[1690]: time="2025-07-07T06:09:53.237984892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 371.673437ms" Jul 7 06:09:53.239244 containerd[1690]: time="2025-07-07T06:09:53.238042372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:09:53.243940 containerd[1690]: time="2025-07-07T06:09:53.243888745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:09:53.247600 containerd[1690]: time="2025-07-07T06:09:53.247548472Z" level=info msg="CreateContainer within sandbox \"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:09:53.319051 containerd[1690]: time="2025-07-07T06:09:53.318972063Z" level=info msg="CreateContainer within sandbox \"e9ef6c5e61f19600b0b48391f0708b99adb65a287ea4de1a8e61dd60312d44c0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2994d2e63cc49227a5069f040afba3a3d8c4baf2810e9e44de959fd4a6a1db9\"" Jul 7 06:09:53.320119 containerd[1690]: time="2025-07-07T06:09:53.320063745Z" level=info msg="StartContainer for \"f2994d2e63cc49227a5069f040afba3a3d8c4baf2810e9e44de959fd4a6a1db9\"" Jul 7 06:09:53.349723 systemd[1]: Started cri-containerd-f2994d2e63cc49227a5069f040afba3a3d8c4baf2810e9e44de959fd4a6a1db9.scope - libcontainer container f2994d2e63cc49227a5069f040afba3a3d8c4baf2810e9e44de959fd4a6a1db9. 
Jul 7 06:09:53.388671 containerd[1690]: time="2025-07-07T06:09:53.388618970Z" level=info msg="StartContainer for \"f2994d2e63cc49227a5069f040afba3a3d8c4baf2810e9e44de959fd4a6a1db9\" returns successfully" Jul 7 06:09:55.023241 containerd[1690]: time="2025-07-07T06:09:55.022602980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:55.028269 containerd[1690]: time="2025-07-07T06:09:55.028186032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 06:09:55.039476 containerd[1690]: time="2025-07-07T06:09:55.039414536Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:55.048520 containerd[1690]: time="2025-07-07T06:09:55.048447235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:55.050079 containerd[1690]: time="2025-07-07T06:09:55.049944158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.805832613s" Jul 7 06:09:55.050079 containerd[1690]: time="2025-07-07T06:09:55.049988678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 06:09:55.054292 containerd[1690]: time="2025-07-07T06:09:55.054207767Z" level=info msg="CreateContainer within sandbox \"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:09:55.111609 containerd[1690]: time="2025-07-07T06:09:55.111430288Z" level=info msg="CreateContainer within sandbox \"bc1003d444891a23802058e9fe8c1d304e61dd0ca59d897f7ef38f1c8937a1f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b38c2848cc718ab61f52adf2126f7272e09a09e554aa2804721ebed64128e64\"" Jul 7 06:09:55.113128 containerd[1690]: time="2025-07-07T06:09:55.112314769Z" level=info msg="StartContainer for \"0b38c2848cc718ab61f52adf2126f7272e09a09e554aa2804721ebed64128e64\"" Jul 7 06:09:55.155472 systemd[1]: Started cri-containerd-0b38c2848cc718ab61f52adf2126f7272e09a09e554aa2804721ebed64128e64.scope - libcontainer container 0b38c2848cc718ab61f52adf2126f7272e09a09e554aa2804721ebed64128e64. 
Jul 7 06:09:55.201197 containerd[1690]: time="2025-07-07T06:09:55.201143917Z" level=info msg="StartContainer for \"0b38c2848cc718ab61f52adf2126f7272e09a09e554aa2804721ebed64128e64\" returns successfully" Jul 7 06:09:55.361648 kubelet[3120]: I0707 06:09:55.361473 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8cb59fcd-6bttb" podStartSLOduration=35.867519553 podStartE2EDuration="49.361448655s" podCreationTimestamp="2025-07-07 06:09:06 +0000 UTC" firstStartedPulling="2025-07-07 06:09:39.74889968 +0000 UTC m=+56.096730850" lastFinishedPulling="2025-07-07 06:09:53.242828782 +0000 UTC m=+69.590659952" observedRunningTime="2025-07-07 06:09:54.19357579 +0000 UTC m=+70.541406920" watchObservedRunningTime="2025-07-07 06:09:55.361448655 +0000 UTC m=+71.709279825" Jul 7 06:09:55.895307 kubelet[3120]: I0707 06:09:55.895203 3120 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:09:55.899009 kubelet[3120]: I0707 06:09:55.898849 3120 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:09:56.199752 kubelet[3120]: I0707 06:09:56.199571 3120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wq7zv" podStartSLOduration=28.868974632 podStartE2EDuration="45.199555585s" podCreationTimestamp="2025-07-07 06:09:11 +0000 UTC" firstStartedPulling="2025-07-07 06:09:38.720666888 +0000 UTC m=+55.068498058" lastFinishedPulling="2025-07-07 06:09:55.051247841 +0000 UTC m=+71.399079011" observedRunningTime="2025-07-07 06:09:56.199017864 +0000 UTC m=+72.546849034" watchObservedRunningTime="2025-07-07 06:09:56.199555585 +0000 UTC m=+72.547386755" Jul 7 06:10:13.534263 kubelet[3120]: I0707 06:10:13.532921 3120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:06.526623 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:38370.service - OpenSSH per-connection server daemon (10.200.16.10:38370). Jul 7 06:11:07.005594 sshd[6501]: Accepted publickey for core from 10.200.16.10 port 38370 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:07.007762 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:07.013110 systemd-logind[1663]: New session 10 of user core. Jul 7 06:11:07.020443 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:11:07.436207 sshd[6501]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:07.440079 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:38370.service: Deactivated successfully. Jul 7 06:11:07.442637 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:11:07.443735 systemd-logind[1663]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:11:07.444767 systemd-logind[1663]: Removed session 10. Jul 7 06:11:12.528572 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:37364.service - OpenSSH per-connection server daemon (10.200.16.10:37364). Jul 7 06:11:13.004403 sshd[6523]: Accepted publickey for core from 10.200.16.10 port 37364 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:13.005080 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:13.010125 systemd-logind[1663]: New session 11 of user core. 
Jul 7 06:11:13.015545 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:11:13.431444 sshd[6523]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:13.435797 systemd-logind[1663]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:11:13.436564 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:37364.service: Deactivated successfully. Jul 7 06:11:13.439676 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:11:13.442879 systemd-logind[1663]: Removed session 11. Jul 7 06:11:18.526567 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:37374.service - OpenSSH per-connection server daemon (10.200.16.10:37374). Jul 7 06:11:19.003963 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 37374 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:19.006839 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:19.012564 systemd-logind[1663]: New session 12 of user core. Jul 7 06:11:19.018442 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:11:19.426500 sshd[6575]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:19.430382 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:37374.service: Deactivated successfully. Jul 7 06:11:19.433088 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:11:19.434697 systemd-logind[1663]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:11:19.435594 systemd-logind[1663]: Removed session 12. Jul 7 06:11:19.522556 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:37378.service - OpenSSH per-connection server daemon (10.200.16.10:37378). Jul 7 06:11:19.993453 sshd[6588]: Accepted publickey for core from 10.200.16.10 port 37378 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:19.994928 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:20.000496 systemd-logind[1663]: New session 13 of user core. Jul 7 06:11:20.005447 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:11:20.436630 sshd[6588]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:20.440512 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:37378.service: Deactivated successfully. Jul 7 06:11:20.443919 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:11:20.445122 systemd-logind[1663]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:11:20.447009 systemd-logind[1663]: Removed session 13. Jul 7 06:11:20.531533 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:54138.service - OpenSSH per-connection server daemon (10.200.16.10:54138). Jul 7 06:11:21.004309 sshd[6599]: Accepted publickey for core from 10.200.16.10 port 54138 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:21.005881 sshd[6599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:21.011425 systemd-logind[1663]: New session 14 of user core. Jul 7 06:11:21.019443 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:11:21.417511 sshd[6599]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:21.420654 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:54138.service: Deactivated successfully. Jul 7 06:11:21.423267 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:11:21.425850 systemd-logind[1663]: Session 14 logged out. Waiting for processes to exit. 
Jul 7 06:11:21.427756 systemd-logind[1663]: Removed session 14. Jul 7 06:11:26.508436 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:54144.service - OpenSSH per-connection server daemon (10.200.16.10:54144). Jul 7 06:11:26.986785 sshd[6634]: Accepted publickey for core from 10.200.16.10 port 54144 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:26.988401 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:26.992851 systemd-logind[1663]: New session 15 of user core. Jul 7 06:11:27.000442 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:11:27.401933 sshd[6634]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:27.406113 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:54144.service: Deactivated successfully. Jul 7 06:11:27.408199 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:11:27.409150 systemd-logind[1663]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:11:27.410377 systemd-logind[1663]: Removed session 15. Jul 7 06:11:32.496548 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:46348.service - OpenSSH per-connection server daemon (10.200.16.10:46348). Jul 7 06:11:32.959850 sshd[6671]: Accepted publickey for core from 10.200.16.10 port 46348 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:32.961488 sshd[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:32.968052 systemd-logind[1663]: New session 16 of user core. Jul 7 06:11:32.975439 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:11:33.380897 sshd[6671]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:33.385560 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:46348.service: Deactivated successfully. Jul 7 06:11:33.387751 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:11:33.389165 systemd-logind[1663]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:11:33.390819 systemd-logind[1663]: Removed session 16. Jul 7 06:11:38.473555 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:46352.service - OpenSSH per-connection server daemon (10.200.16.10:46352). Jul 7 06:11:38.937596 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 46352 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:38.939099 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:38.943498 systemd-logind[1663]: New session 17 of user core. Jul 7 06:11:38.949435 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:11:39.353050 sshd[6684]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:39.356213 systemd-logind[1663]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:11:39.356397 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:46352.service: Deactivated successfully. Jul 7 06:11:39.358441 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:11:39.360800 systemd-logind[1663]: Removed session 17. Jul 7 06:11:39.450455 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:46366.service - OpenSSH per-connection server daemon (10.200.16.10:46366). 
Jul 7 06:11:39.926505 sshd[6697]: Accepted publickey for core from 10.200.16.10 port 46366 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:39.928099 sshd[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:39.932455 systemd-logind[1663]: New session 18 of user core. Jul 7 06:11:39.938622 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:11:40.455621 sshd[6697]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:40.459397 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:46366.service: Deactivated successfully. Jul 7 06:11:40.461093 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:11:40.461893 systemd-logind[1663]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:11:40.462834 systemd-logind[1663]: Removed session 18. Jul 7 06:11:40.544484 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:44250.service - OpenSSH per-connection server daemon (10.200.16.10:44250). Jul 7 06:11:41.039924 sshd[6708]: Accepted publickey for core from 10.200.16.10 port 44250 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:41.041396 sshd[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:41.045884 systemd-logind[1663]: New session 19 of user core. Jul 7 06:11:41.052416 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:11:42.202954 sshd[6708]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:42.207355 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:44250.service: Deactivated successfully. Jul 7 06:11:42.210794 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:11:42.211560 systemd-logind[1663]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:11:42.213266 systemd-logind[1663]: Removed session 19. Jul 7 06:11:42.292700 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:44266.service - OpenSSH per-connection server daemon (10.200.16.10:44266). Jul 7 06:11:42.785272 sshd[6726]: Accepted publickey for core from 10.200.16.10 port 44266 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:42.786083 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:42.790785 systemd-logind[1663]: New session 20 of user core. Jul 7 06:11:42.796464 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:11:43.453934 sshd[6726]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:43.457251 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:44266.service: Deactivated successfully. Jul 7 06:11:43.462602 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:11:43.464817 systemd-logind[1663]: Session 20 logged out. Waiting for processes to exit. Jul 7 06:11:43.466104 systemd-logind[1663]: Removed session 20. Jul 7 06:11:43.545519 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:44280.service - OpenSSH per-connection server daemon (10.200.16.10:44280). Jul 7 06:11:44.014918 sshd[6737]: Accepted publickey for core from 10.200.16.10 port 44280 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:44.016504 sshd[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:44.023873 systemd-logind[1663]: New session 21 of user core. Jul 7 06:11:44.030654 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 7 06:11:44.457546 sshd[6737]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:44.461624 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:44280.service: Deactivated successfully. Jul 7 06:11:44.464916 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 06:11:44.466342 systemd-logind[1663]: Session 21 logged out. Waiting for processes to exit. Jul 7 06:11:44.469026 systemd-logind[1663]: Removed session 21. Jul 7 06:11:49.551521 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:44292.service - OpenSSH per-connection server daemon (10.200.16.10:44292). Jul 7 06:11:50.023215 sshd[6774]: Accepted publickey for core from 10.200.16.10 port 44292 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:50.024878 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:50.028887 systemd-logind[1663]: New session 22 of user core. Jul 7 06:11:50.034410 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 06:11:50.427799 sshd[6774]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:50.432410 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:44292.service: Deactivated successfully. Jul 7 06:11:50.433385 systemd-logind[1663]: Session 22 logged out. Waiting for processes to exit. Jul 7 06:11:50.435458 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 06:11:50.438058 systemd-logind[1663]: Removed session 22. Jul 7 06:11:55.522567 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:60388.service - OpenSSH per-connection server daemon (10.200.16.10:60388). Jul 7 06:11:55.985266 sshd[6810]: Accepted publickey for core from 10.200.16.10 port 60388 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:11:55.986818 sshd[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:55.991160 systemd-logind[1663]: New session 23 of user core. Jul 7 06:11:55.999441 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 06:11:56.409043 sshd[6810]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:56.411904 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:60388.service: Deactivated successfully. Jul 7 06:11:56.414871 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 06:11:56.417978 systemd-logind[1663]: Session 23 logged out. Waiting for processes to exit. Jul 7 06:11:56.419412 systemd-logind[1663]: Removed session 23. Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.344368 1673 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.344429 1673 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.344630 1673 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.345000 1673 omaha_request_params.cc:62] Current group set to lts Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.345091 1673 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.345098 1673 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 06:11:59.345271 update_engine[1673]: I20250707 06:11:59.345115 1673 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 06:11:59.346448 update_engine[1673]: I20250707 06:11:59.346401 1673 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 06:11:59.346745 update_engine[1673]: I20250707 06:11:59.346709 1673 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 06:11:59.346745 update_engine[1673]: I20250707 06:11:59.346729 1673 omaha_request_action.cc:272] Request: [multi-line Omaha request XML body not preserved in this capture] Jul 7 06:11:59.346745 update_engine[1673]: I20250707 06:11:59.346736 1673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:11:59.351590 locksmithd[1722]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 06:11:59.351878 update_engine[1673]: I20250707 06:11:59.350557 1673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:11:59.351878 update_engine[1673]: I20250707 06:11:59.350885 1673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 06:11:59.459401 update_engine[1673]: E20250707 06:11:59.459339 1673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:11:59.459543 update_engine[1673]: I20250707 06:11:59.459447 1673 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 06:12:01.503540 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:42084.service - OpenSSH per-connection server daemon (10.200.16.10:42084). Jul 7 06:12:01.983402 sshd[6823]: Accepted publickey for core from 10.200.16.10 port 42084 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:12:01.984111 sshd[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:01.990004 systemd-logind[1663]: New session 24 of user core. Jul 7 06:12:01.999482 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 06:12:02.060979 systemd[1]: run-containerd-runc-k8s.io-d697ffedd2a4af00353b5868e48a58fe4d29e8c2c845950bbb03dbcdbfc5dd66-runc.mekwh0.mount: Deactivated successfully. Jul 7 06:12:02.428102 sshd[6823]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:02.432272 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:42084.service: Deactivated successfully. Jul 7 06:12:02.435784 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 06:12:02.439674 systemd-logind[1663]: Session 24 logged out. Waiting for processes to exit. Jul 7 06:12:02.441446 systemd-logind[1663]: Removed session 24. Jul 7 06:12:04.895048 systemd[1]: run-containerd-runc-k8s.io-0e220ba7637b8bf4b6128b6804052b8c7ecd68aecb729a6521740357a4bf87e2-runc.XdLe8T.mount: Deactivated successfully. Jul 7 06:12:07.518525 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:42088.service - OpenSSH per-connection server daemon (10.200.16.10:42088).
Jul 7 06:12:07.990961 sshd[6874]: Accepted publickey for core from 10.200.16.10 port 42088 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:12:07.992493 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:07.996816 systemd-logind[1663]: New session 25 of user core. Jul 7 06:12:08.001460 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 06:12:08.400495 sshd[6874]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:08.404430 systemd-logind[1663]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:12:08.405625 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:42088.service: Deactivated successfully. Jul 7 06:12:08.408557 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:12:08.410370 systemd-logind[1663]: Removed session 25. Jul 7 06:12:09.342538 update_engine[1673]: I20250707 06:12:09.342391 1673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:12:09.343463 update_engine[1673]: I20250707 06:12:09.343015 1673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:12:09.343463 update_engine[1673]: I20250707 06:12:09.343282 1673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 06:12:09.420734 update_engine[1673]: E20250707 06:12:09.420604 1673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:12:09.420734 update_engine[1673]: I20250707 06:12:09.420701 1673 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 06:12:13.501589 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:54982.service - OpenSSH per-connection server daemon (10.200.16.10:54982). Jul 7 06:12:13.976057 sshd[6906]: Accepted publickey for core from 10.200.16.10 port 54982 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:12:13.978150 sshd[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:13.986621 systemd-logind[1663]: New session 26 of user core. Jul 7 06:12:13.993543 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 06:12:14.146414 systemd[1]: run-containerd-runc-k8s.io-df05deaefc753f05b2d9bf1d1451a84c3caabe08232f7c2fd4c7c5c98fd1b3cf-runc.yaeNIt.mount: Deactivated successfully. Jul 7 06:12:14.425797 sshd[6906]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:14.429624 systemd-logind[1663]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:12:14.430734 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:54982.service: Deactivated successfully. Jul 7 06:12:14.433990 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:12:14.438211 systemd-logind[1663]: Removed session 26. Jul 7 06:12:19.346316 update_engine[1673]: I20250707 06:12:19.346239 1673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:12:19.346684 update_engine[1673]: I20250707 06:12:19.346487 1673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:12:19.346758 update_engine[1673]: I20250707 06:12:19.346724 1673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:12:19.459630 update_engine[1673]: E20250707 06:12:19.459563 1673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:12:19.459773 update_engine[1673]: I20250707 06:12:19.459661 1673 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 06:12:19.512560 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:54986.service - OpenSSH per-connection server daemon (10.200.16.10:54986). Jul 7 06:12:19.979306 sshd[6943]: Accepted publickey for core from 10.200.16.10 port 54986 ssh2: RSA SHA256:oCefl81iYy13tMbeenYuBuFHcdiDlhmj3jpc1G3JQP0 Jul 7 06:12:19.980939 sshd[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:19.986352 systemd-logind[1663]: New session 27 of user core. Jul 7 06:12:19.994543 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 06:12:20.400783 sshd[6943]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:20.405575 systemd-logind[1663]: Session 27 logged out. Waiting for processes to exit. Jul 7 06:12:20.405758 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:54986.service: Deactivated successfully. Jul 7 06:12:20.408076 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 06:12:20.410739 systemd-logind[1663]: Removed session 27.